Community Engagement Grants Archives | DORA

Hacia una evaluación académica justa y responsable en la Red de Centros CLACSO de Venezuela
https://sfdora.org/2023/02/16/hacia-una-evaluacion-academica-justa-y-responsable-en-la-red-de-centros-clacso-de-venezuela/

Scroll down for the English translation of this report: Community Engagement Grant Report: “Towards a fair and responsible academic evaluation in the Network of CLACSO Centers of Venezuela”

A DORA Community Engagement Grants Report

In November 2021, DORA announced that we were piloting a new Community Engagement Grants: Supporting Academic Assessment Reform program with the goal of building on the momentum of the declaration and providing resources to advance fair and responsible academic assessment. In 2022, the DORA Community Engagement Grants supported 10 project proposals. The results of the “Towards a fair and responsible academic evaluation in the Network of CLACSO Centers of Venezuela” project are outlined below.

By María Ángela Petrizzo Páez, Annel Mejías Guiza, Ximena González Broquen, Eisamar Ochoa — Venezuela

La evaluación académica y la democratización de la ciencia constituyen una preocupación del Consejo Latinoamericano de Ciencias Sociales (CLACSO). El Foro Latinoamericano sobre Evaluación Científica (FOLEC-CLACSO) es una muestra de esto. En Venezuela existen 64 centros CLACSO, espacio que consideramos importante para iniciar un debate sobre una ciencia abierta, justa y responsable.

El proyecto “Políticas de evaluación académica aplicadas a la Red de Centros CLACSO de Venezuela”, realizado entre marzo y septiembre del 2022, respondió a esta intención. El estudio buscó describir los elementos distintivos de los procesos de evaluación académica al interior de los centros que componen esta Red. Para lograrlo, en la primera etapa de la investigación se aplicó una encuesta, con 3 secciones: (a) caracterización, (b) exploración de los mecanismos, procesos y normativas que usan para evaluar a sus investigadoras e investigadores, criterios de contratación, permanencia y promoción que estos aplican, desde un enfoque interseccional; y (c) exploración de las aspiraciones y propuestas sobre la evaluación. En la encuesta participaron 44 jefes y jefas de centros, representando un 68% de la población. En la segunda y tercera etapa, se construyó la declaración con aportes colectivos, realizados desde la encuesta y en las entrevistas hechas a 11 jefes y jefas de centros.

Este estudio genera por primera vez una caracterización de la Red de Centros CLACSO Venezuela, así como determina sus potencialidades para orientar futuras acciones en la reforma de las políticas de evaluación académica. Destacamos la diversidad de la red, ya que predomina una mayor tendencia con adscripción universitaria, mientras que 2 o 3 de cada 10 centros pertenece a algún ministerio gubernamental y un 25% tiene figuras privadas. La antigüedad de los centros ubica a la mayoría con más de 11 años, destacando los centros con más de 20 años, mientras que los centros más jóvenes son la menor tendencia. En términos cuantitativos, suman 409 mujeres, 311 varones y 3 personas con género “otro”. La mayoría tiene a estudiantes en proceso de formación, realizando mayoritariamente postgrado. Estos datos nos indican que la Red está conformada por centros con trayectoria de investigación, con un número significativo de estudiantes, investigadores e investigadoras dedicadas a las ciencias sociales.

Otra característica se centra en que la mitad de la muestra tiene entre 1 y 5 años de formar parte de CLACSO, mientras que 2 o 3 de cada 10 centros suma entre 6-10 años y, en menor proporción, tiene más de 10 años. Esto traduce que en la última década ha aumentado en casi 80% el número de centros de Venezuela que se han integrado a CLACSO. Otra cifra destacable es que 7 de cada 10 centros están en la región capital del país y en 8 de los 24 estados del país, sin haber representación de la región sur; lo que nos indica una centralización marcada.

En esta red, 8 de cada 10 centros reportan que cuentan con mujeres y hombres en su planta investigadora. La mayoría de mujeres ejerce cargos directivos, pero más del 90% no incluye consideraciones de género en los baremos de evaluación; no obstante, en la tercera parte de la encuesta, dentro de las propuestas para una evaluación académica justa y responsable, el enfoque de género resulta ser un criterio importante.

La Red cuenta con un número significativo de revistas. 5 de cada 10 centros consultados tienen una o más de 2 revistas, de las cuales 7 de cada 10 publicaciones utilizan licencia libre y la mitad se encuentra o bien indexada, o bien reposa en bases de datos, directorios y/o guías bibliográficas. Esta característica representa una oportunidad importante para fortalecer las publicaciones en acceso abierto y, además, iniciar un diálogo para diversificar sus normativas de evaluación.

Al ser una red con una mayor tendencia de centros universitarios, el sistema de clasificación del personal de investigación predominante sigue la carrera académica establecida en la legislación nacional. La educación en Venezuela es pública, por ello, consideramos importante la participación del Estado en el debate sobre la transformación en las políticas de evaluación académica.

En cuanto a los sistemas de evaluación, cabe destacar que cerca del 25% no aplica baremos para evaluar la producción de quienes investigan y un 88% no utiliza la publicación de “revistas indexadas” como dispositivo de evaluación. Entre aquellos que sí cuentan con baremos, las formas más utilizadas son: la “revisión por pares” (50%) seguido por el “nivel profesional” (27%); sigue la “experiencia” y “formación de talento humano” (25%). Los mecanismos de evaluación con menor nivel de popularidad son la “participación en revistas no científicas arbitradas”, “publicaciones no indexadas”, y las valoraciones “cienciométricas”. Solo ocho centros brindaron información precisa sobre la utilización de algún índice de referencia para las publicaciones, siendo los más populares los índices latinoamericanos que promueven el acceso abierto. En cuanto al perfil de las personas que evalúan, más del 80% son personas activas con postgrados y un 75% pertenece al mismo centro. Estas características permiten visualizar prácticas concretas de ciencia abierta y un potencial de articulación para construir colectivamente políticas de evaluación.

Sobre los procesos de promoción, un 34% no reconoce ningún producto. Aquellos que sí responden afirman que la “participación en eventos académicos” (43%) y las “actividades docentes” (41%) son los productos más reconocidos; mientras que las valoraciones “cienciométricas” y por “patentes de invención” tienen menor importancia. Sobre el tipo de autoría, la mayoría (50%) señaló calificar con el mismo nivel de importancia las autorías “individuales”, tanto como las “colectivas” y las “participativas/comunitarias”. En cuanto a los enfoques mejor valorados en los proyectos de investigación, se evidenció una mayor frecuencia en investigación aplicada (20%), investigación inter y pluridisciplinaria (16%), y con enfoques mixtos (14%). Estas singularidades indican la importancia de la difusión de la ciencia y la formación, además de la relevancia del trabajo en equipo.

La mitad de los centros CLACSO Venezuela dice regirse por el “reglamento o normativa interna” como instrumento de referencia utilizado para el proceso de evaluación, y un 23% por la “legislación” vigente. Sin embargo, un 39% informa no contar con normas, así como un 64% de los centros no cuenta con políticas de la DEI (Diversidad, Equidad e Inclusión). Solo un 36% afirma conocer la Declaración de Berlín, y un 61% indica no contar con una política de acceso abierto; sin embargo, un 89% considera que los Recursos Educativos Abiertos son instrumentos clave para el desarrollo de una ciencia más justa y participativa. Estas peculiaridades trazan un camino para profundizar el debate sobre la importancia del acceso abierto.

En la última parte de la encuesta, encontramos resultados interesantes, como la importancia dada al fomento de criterios para contribuir a generar una ciencia participativa y un conocimiento público y común. Un 89% enfatiza en la necesidad de incluir el reconocimiento y valoración de las personas que desarrollen investigaciones pertinentes, independientemente de su perfil académico, y un 93% considera que la evaluación por pares podría integrar a personas con experiencia reconocida en el tema de estudio, que no necesariamente posean títulos académicos. El 98% considera necesaria la implementación de mecanismos colaborativos y participativos para involucrar directamente a las comunidades y sujetos estudiados, y un 95% opina que las mismas deben evaluar la pertinencia que tiene la investigación para la comprensión y/o transformación de sus realidades.

En cuanto a los criterios de evaluación científica, un 98% considera que estos deberían ser construidos contextualmente, de manera participativa, con base en la realidad particular de cada caso. Esto implica la inclusión de las comunidades en la evaluación de la pertinencia. En cuanto a los tipos de indicadores a ser utilizados en estos procesos, la mayoría se inclina por considerar aquellos orientados al uso y desarrollo de políticas públicas (70%), seguido de los indicadores cualitativos de impacto y de relevancia e interacción social de la ciencia (66%). En cuanto a mecanismos de publicación que pudieran contribuir al desarrollo de una evaluación académica justa y responsable, encabezan las publicaciones en repositorios en acceso abierto. Estas aspiraciones nos indican que la cuasi totalidad apunta al desarrollo de una ciencia participativa que incluya en la evaluación a sujetos diversos, independientemente de sus niveles de estudio, a la construcción de criterios contextuales e indicadores cualitativos, como aquellos enfocados al desarrollo de políticas públicas y de relevancia social, al fomento de las publicaciones en acceso abierto y en revistas arbitradas, mas no indexadas.

Esta investigación traza acciones, propuestas y/o recomendaciones en pro del desarrollo de una evaluación académica justa y responsable. Destacan los criterios de transparencia, transdisciplinariedad, enfoque de género, medición del impacto socio-académico y evaluación situada. Los hallazgos presentados sobre estas líneas son los que alimentan una propuesta de “Declaración en pro de una evaluación académica justa y responsable”, disponible aquí.

Los resultados de esta investigación han sido presentados en la conferencia CLACSO MÉXICO 2022, en CEISAL 2022 y en ABA 2022. Invitamos a escuchar en español los tres podcast y a visualizar las infografías con los resultados del proyecto.

About the project team

María Ángela Petrizzo Páez es coordinadora del proyecto “Políticas de evaluación académica aplicadas a la Red de Centros CLACSO de Venezuela”, profesora de la Universidad Nacional del Turismo Núcleo HELAV y está adscrita a la Dirección Nacional de Producción de Conocimiento (petrizzo@gmail.com)
Ximena González-Broquen es investigadora del proyecto, jefa del Centro de Estudio de Transformaciones Sociales del Instituto Venezolano de Investigaciones Científicas (xigonz@gmail.com)
Eisamar Ochoa es investigadora del proyecto, trabaja en el Centro de Estudio de Transformaciones Sociales del Instituto Venezolano de Investigaciones Científicas (eisamar.ochoa@gmail.com)
Annel Mejías Guiza es investigadora del proyecto, profesora del Departamento de Investigación, de la Facultad de Odontología de la Universidad de Los Andes (annelmejias@gmail.com)

Towards a fair and responsible academic evaluation in the Network of CLACSO Centers of Venezuela

In the context of the Latin American Council of Social Sciences (CLACSO), academic evaluation and democratization of science are of great concern. FOLEC-CLACSO, the Latin American Forum on Scientific Evaluation, is one example. CLACSO has 64 affiliate centers in Venezuela, which we consider crucial to promoting a discussion on the relevance of open, fair, and responsible science in our country.

The research project “Academic evaluation policies applied to the Network of CLACSO Centers in Venezuela”, carried out from March to September 2022, responded to this intention. The study aimed to describe the distinctive features of academic evaluation processes within these centers. As a first step of the research, we conducted a survey consisting of three sections: (a) characterization of the centers; (b) exploration of the mechanisms, processes, and regulations used to evaluate researchers, and of the hiring, permanence, and promotion criteria applied, from an intersectional perspective; and (c) exploration of aspirations and proposals regarding evaluation. The survey was answered by 44 heads of centers, representing 68% of the population. In the second and third stages of the project, a declaration was drafted with collective contributions drawn from the survey responses and from interviews with 11 heads of centers.

This study characterizes the CLACSO Venezuela Centers Network for the first time and identifies its potential to guide future reform of academic evaluation policies. It is important to highlight the diversity of the network: most centers belong to universities, while two or three out of ten are attached to a government ministry and 25% are private. The majority of centers are over 11 years old, with those over 20 years old standing out, while the youngest centers are the least common. The network counts 409 women, 311 men, and 3 researchers of “other” genders. Most centers host students in training, mainly at the postgraduate level. These data indicate that the Network is made up of centers with established research trajectories and a significant number of students and researchers dedicated to the social sciences, a promising base from which to encourage a debate on fair and responsible academic evaluation.

Half of the participating centers have belonged to CLACSO for between one and five years, while two or three out of ten have been members for six to ten years, and a smaller share for more than ten years. This means that the number of Venezuelan centers that have joined CLACSO has increased by almost 80% in the last decade. Another noteworthy figure is that 7 out of 10 centers are located in the capital region, and the network is present in only 8 of the country's 24 states, with no representation from the southern region, which indicates a marked centralization.

In the CLACSO Venezuela Network, 8 out of 10 centers report that they have both women and men on their research staff. The majority of women hold managerial positions, but more than 90% of centers do not include gender considerations in their evaluation scales; however, in the third part of the survey, within the aspirations and proposals for a fair and responsible academic evaluation, the gender approach turns out to be an important criterion. This aspiration points to an opportunity for including the gender perspective in future debates.

The Network also has a significant number of journals. Five out of every ten centers consulted publish at least one journal; seven out of ten of these publications use free licenses, and half of them are either indexed or included in databases, directories, and/or bibliographic guides. This characteristic represents an important opportunity to strengthen open-access publications and, in addition, to initiate a dialogue to diversify their evaluation standards.

Since the network is composed mainly of university centers, the predominant classification system for research personnel follows the academic career path established in national legislation. Education in Venezuela is public, according to the Bolivarian Constitution of Venezuela; therefore, we consider the participation of the State in the debate on academic evaluation policies to be important.

Regarding evaluation systems, it should be noted that about 25% do not apply scales to evaluate researchers' output and 88% do not use publication in “indexed journals” as an evaluation device. Among those that do have scales, the most commonly used criteria are “peer review” (50%), followed by “professional level” (27%), then “experience” and “training of human talent” (25%). The evaluation mechanisms with the lowest level of popularity are “participation in non-scientific peer-reviewed journals”, “participation in non-indexed publications”, and “scientometric” evaluations. Only eight centers provided precise information on the use of a reference index for the evaluation of publications, the most popular being the Latin American indexes that promote open access. As for the profile of the people who evaluate, more than 80% are active researchers with postgraduate degrees and 75% belong to the same center. These characteristics allow us to visualize concrete open science practices and a potential to articulate CLACSO Venezuela centers and collectively build fairer and more responsible evaluation policies.

Regarding promotion processes, 34% do not recognize any academic product. Those who did respond stated that “participation in academic events” (43%) and “teaching activities” (41%) are the most recognized products, while “scientometric” assessments and “invention patents” are of lesser importance. Regarding the type of authorship, the majority (50%) rated “individual” authorship as equally important as “collective” and “participatory/community” authorship. As for the most highly rated approaches in research projects, there was a higher frequency of applied research (20%), inter- and multidisciplinary research (16%), and mixed approaches (14%). These singularities indicate the importance of science dissemination and training, as well as the relevance of collective work in teams made up of professionals from different disciplines.

Half of the CLACSO Venezuela centers say that they are governed by “internal regulations or norms” as the reference instrument for the evaluation process, and 23% by the “legislation” in force. However, 39% report not having norms, and 64% of the centers do not have DEI (Diversity, Equity, and Inclusion) policies. Only 36% say they are aware of the Berlin Declaration, and 61% indicate that they do not have an open access policy; however, 89% consider that Open Educational Resources are key instruments for the development of a fairer and more participatory science. These findings trace a path toward deepening the debate on the importance of open access and integrating Venezuela into this global discussion.

In the last part of the survey, we found interesting results, such as the importance given to promoting criteria that contribute to a participatory science and to public and common knowledge: 89% emphasize the need to recognize and value people who develop relevant research, regardless of their academic profile. 98% consider it necessary to implement collaborative and participatory mechanisms to directly involve the communities and subjects studied, as researchers and trainers. 95% believe that participatory and open mechanisms should be implemented so that the communities and subjects of study can evaluate the relevance of the research for the understanding and/or transformation of their realities. Finally, 93% consider that peer review could include people with recognized experience in the subject of study who do not necessarily hold academic degrees.

With regard to scientific evaluation criteria, 98% consider that these should be constructed contextually, in a participatory way, based on the particular reality of each case. This implies including communities in the evaluation of relevance, as well as people with recognized experience who do not necessarily hold academic degrees. Regarding the types of indicators to be used in these processes, the majority is inclined to consider those oriented to the use and development of public policies (70%), followed by qualitative indicators of impact, relevance, and social interaction of science (66%). As for publication mechanisms that could contribute to the development of a fair and responsible academic evaluation, publications in open access repositories lead the list. These aspirations indicate that almost all respondents point toward a participatory science that includes diverse actors in evaluation, regardless of their level of formal study; toward the construction of contextual criteria and qualitative indicators, especially those focused on public policy development and social relevance; and toward the promotion of publications in open access and in peer-reviewed, though not necessarily indexed, journals.

This research also outlines actions, proposals, and/or recommendations for the development of a fair and responsible academic evaluation. The proposed criteria are varied and include transparency, transdisciplinarity, a gender approach, measurement of socio-academic impact, and situated evaluation. The findings presented above feed into a proposed “Declaration for a fair and responsible academic evaluation”, available here.

The results of this research have been presented at the CLACSO MEXICO 2022 conference, at CEISAL 2022, and at ABA 2022. We invite you to listen to the three podcasts in Spanish and to view the infographics with the results of the project.

About the project team

María Ángela Petrizzo Páez is the coordinator of the project “Academic evaluation policies applied to the Network of CLACSO Centers in Venezuela”, a professor at the Universidad Nacional del Turismo Núcleo HELAV, and is attached to the National Directorate of Knowledge Production (petrizzo@gmail.com).
Ximena González-Broquen is a researcher of the project, and head of the Center for the Study of Social Transformations of the Venezuelan Institute for Scientific Research (xigonz@gmail.com).
Eisamar Ochoa is a project researcher, who works at the Center for the Study of Social Transformations of the Venezuelan Institute for Scientific Research (eisamar.ochoa@gmail.com).
Annel Mejías Guiza is a researcher of the project, and a professor at the Research Department, Faculty of Dentistry, Universidad de Los Andes (annelmejias@gmail.com).

The Colombian responsible metrics Project: towards a Colombian institutional, methodological instrument for research assessment
https://sfdora.org/2023/02/16/the-colombian-responsible-metrics-project-towards-a-colombian-institutional-methodological-instrument-for-research-assessment/

A DORA Community Engagement Grants Report

In November 2021, DORA announced that we were piloting a new Community Engagement Grants: Supporting Academic Assessment Reform program with the goal of building on the momentum of the declaration and providing resources to advance fair and responsible academic assessment. In 2022, the DORA Community Engagement Grants supported 10 project proposals. The results of “The Colombian responsible metrics Project: towards a Colombian institutional, methodological instrument for research assessment” are outlined below.

By César Pallares, Salim Chalela, María Alejandra Tejada, Elizabeth Bernal, César Rendón, Alida Acosta, Lorena Ruíz, Hernán Muñoz — Asociación Colombiana de Universidades, Asociación Colombiana de Editoriales Universitarias, Consorcio Colombia, CoLaV, Observatorio Colombiano de Ciencia y Tecnología, Red GCTI, COREMA (Colombia)

In March 2021, Colombia launched a new edition of its biannual assessment of research groups. As usual, this event generated reactions from the academic community, and a debate emerged around the suitability and use of this assessment exercise. However, that reaction differed from previous years: the community was more organized than before, and the critiques and recommendations were more structural and intended to contribute to the country. Different academic networks joined efforts to raise the level of the debate and to include representation from those who have studied the evaluation framework and those responsible for supporting the researchers who are evaluated.

One of those efforts was the responsible metrics initiative. Leading Colombian institutions, such as the Colombian Association of Universities (Ascun), the Colombian Association of University Publishers (Aseuc), the Colombian Observatory of Science and Technology (OCyT), the Colombian Association of Research Managers (COREMA), the Colombian Network of Management and Governance of Science and Technology (RedGCTI), the Collaboratory for Computational Social Sciences (CoLaV UdeA), and the Consorcio Colombia, worked together to facilitate and foster scenarios to discuss the potentialities and limitations of responsible metrics for research evaluation in Colombia.

The coordination team, consisting of representatives of the institutions mentioned above, was responsible for gathering the insights and pursuing the two purposes of our initiative: to propose a policy brief aimed at changing research assessment at the national level, and to develop a Colombian rubric that helps institutions design their self-assessment framework. To reach this goal, we defined two methodological steps. First, we organized seven international seminars in which experts shared their perspectives and experiences around research assessment, with the participation of INORMS, the Research on Research Institute, the Ingenio Institute, the CWTS of Leiden, DORA, FOLEC, and others. Second, with the contribution of universities, we organized two commissions, one to propose the policy brief and the other to develop the Colombian rubric.

Our first output was an agreed-upon definition of 13 problems associated with the Colombian research evaluation system. They are:

  1. Evaluation disconnected from the country’s reality
  2. Lack of knowledge of alternative ways of doing research evaluation
  3. Standardization of the measurement method
  4. Incentive schemes that generate inappropriate behavior
  5. Lack of articulation among the actors of the science, technology, and innovation system on research evaluation criteria
  6. Lack of funding for STI
  7. Economic interests that skew the evaluation and focus on individuals
  8. Delegitimization of evaluation as a valid exercise to promote research
  9. Economic interests of external stakeholders as responsible for the definition of evaluation models
  10. Definition of quantitative metrics focused on journal indexing systems that lose sight of research quality
  11. Lack of open spaces to discuss evaluation models based on consensus building
  12. Resistance to change in certain system actors, which reduces the possibility of exploring other options
  13. Lack of interoperability between existing information systems in the country, which makes it challenging to generate alternative metrics and indicators

From these problems, this project found that institutions of higher education (IHE) are mainly affected by three of them: lack of knowledge of alternative ways to assess research (2), the national assessment ecosystem (10), and resistance to change (12). Focusing on these, we developed a strategy to design our tool to promote assessment change in Colombian institutions.

The next step was to engage with international standards and rubrics. We selected three: 1) SCOPE from INORMS, 2) SPACE from DORA, and 3) FOLEC. We studied their steps, the recommendations they supplied, the cases that implemented those rubrics, and the lessons they learned. This information allowed us to see the common points we could use in our exercise: the need to configure an assessment committee, to anticipate the possible indirect impacts the assessment might cause, and to evaluate the assessment strategies themselves.

Using the insights from these existing frameworks as a starting point, we set out to develop a rubric tailored to the Colombian system and to research management practices in the country.

From those inputs, we developed a Colombian rubric to help institutions design their assessment exercises. Our rubric has five stages. In the Ideation stage, the University creates the steering committee of the assessment, with principles of diversity (gender, discipline, age, ethnicity, among others), whose role is to define why the evaluation is necessary for the University. In the Design stage, the University establishes different options, using design thinking tools, to solve the institutional challenge that requires an assessment. The selected option is tested in the Pilot stage to find unintended outcomes, identify groups that might be discriminated against by the review, and receive feedback on the process. In the Implementation stage, the evaluation procedure is carried out by the University, but it should include the possibility of being changed if the institution finds any significant problem; in this sense, the process for changing the evaluation should be clear from the beginning for all stakeholders. Finally, the Evaluation stage tries to understand what worked and what did not in the assessment, so the institution learns for future assessment exercises.

Once the rubric was completed, we organized five focus groups with stakeholders and experts on research metrics. They gave us feedback to improve the tool and alerted us to shortcomings its implementation might carry. Lastly, six months later (July 2022), we ran a workshop where research vice presidents gathered to discuss responsible metrics. To analyze our proposal, we organized the workshop around two moments:

First, presentation of responsible metrics and rubrics: the first activity we conducted was to present the responsible metrics framework and the Colombian initiative. We focused on describing SCOPE, SPACE, FOLEC, and our own proposal.

Second, working groups around selected topics: we organized eight working groups, each responsible for analyzing a specific hypothetical scenario. Each group had to solve three challenges: 1) select the principles that should orient research evaluation, 2) define the conception of quality and the desired characteristics for the University in that scenario, and 3) construct the profiles of the members of the steering committee for the assessment scenario.

The topics the working groups addressed were:

  1. Research awards (César Pallares)
  2. Select which research projects should get a grant (Salim Chalela)
  3. Hire new professors at the University (Hernán Muñoz)
  4. Promotions in the research career (Elizabeth Bernal)
  5. Give incentives (financial or not) to increase research performance (María Alejandra Tejada)
  6. Select postdocs or Ph.D. holders to work in the institution (Alida Acosta)
  7. Define the criteria to select the research papers that could get their APC funded (César Rendón)
  8. Select research books to be published (Lorena Ruíz)

This workshop was a terrific opportunity to show research leaders that new frameworks for assessing research are possible. The next step was to build a website for the initiative, making it easier for the scientific community to access information on responsible metrics in Spanish and to see the alternatives they can use to assess their research performance (https://www.metricasresponsables.co). We produced this website thanks to the support of DORA's community grant.

In addition, we developed resources that can help researchers understand responsible metrics and help institutions apply new frameworks to assess research. To do that, we have developed infographic materials to disseminate the logic of responsible metrics and the results of our initiative.

We are pleased with the results of our work. Responsible metrics are now a known concept in the Colombian R&D system, and a growing number of actors are exploring them to change their evaluation practices. We understand that the work does not stop with what we have done, and our goals have evolved as we finish this project. First, we will update our resources and our website with new developments that might be built in our country (for example, some member institutions of this initiative are working on technical guidelines for research metrics) or at the international level (such as the new toolkits that DORA has been working on). We aim to offer a website that researchers can rely on as an up-to-date source of information. Second, as organizations, we hope to continue our efforts to promote and disseminate the use of responsible metrics in institutions so the rubrics keep their momentum. Finally, we will contribute to the academic community by analyzing responsible metrics and the experiences we gained with this initiative, thereby helping to expand the available knowledge about new ways to assess and measure research.

Young researchers in action: the road towards a new PhD evaluation
https://sfdora.org/2023/02/16/young-researchers-in-action-the-road-towards-a-new-phd-evaluation/

A DORA Community Engagement Grants Report

In November 2021, DORA announced that we were piloting a new Community Engagement Grants: Supporting Academic Assessment Reform program with the goal of building on the momentum of the declaration and providing resources to advance fair and responsible academic assessment. In 2022, the DORA Community Engagement Grants supported 10 project proposals. The results of the “Young researchers in action: the road towards a new PhD evaluation” project are outlined below.

By Inez Koopman and Annemijn Algra — Young SiT, University Medical Center Utrecht (Netherlands)

Less emphasis on bibliometrics, more focus on personal accomplishments and growth in research-related competencies. That is the goal of Young Science in Transition’s (Young SiT) new evaluation approach for PhD candidates in Utrecht, the Netherlands. But what do PhD candidates think about the new evaluation? With the DORA engagement grant, we conducted in-depth interviews with PhD candidates and found out how the new evaluation can be improved and successfully implemented.

The beginning: from idea to evaluation

Together with Young SiT, a think tank of young scientists at the UMC Utrecht, we (Inez Koopman and Annemijn Algra) have been working on the development and implementation of a new evaluation method for PhD candidates since 2018 [1]. In this new evaluation, PhD candidates are asked to describe their progress, accomplishments, and learning goals. The evaluation also includes a self-assessment of their competencies. We started bottom-up, small, and locally. This meant that we first tested our new method in our own PhD program (Clinical and Experimental Neurosciences, where approximately 200 PhD candidates are enrolled). After a first round of feedback, we realized the self-evaluation tool (the Dutch PhD Competence Model) needed to be modernized. Together with a group of enthusiastic programmers, we critically reviewed its content, gathered user feedback from various early career networks, and transformed the existing model into a modern and user-friendly web-based tool [2].

In the meantime, we started approaching other PhD programs from the Utrecht Graduate School of Life Sciences (GSLS) to further promote and roll out our new method. We managed to get support ‘higher up’: the directors and coordinators of the GSLS and the Board of Studies of Utrecht University were interested in our idea. They too were working on a new evaluation method, so we decided to team up. Our ideas were transformed into a new and broad evaluation form and guide that can soon be used by all PhD candidates enrolled in one of the 15 GSLS programs (approximately 1800 PhDs).

However, during the many discussions we had about the new evaluation, one question kept popping up: ‘but what is the scientific evidence that this new evaluation is better than the old one?’ Although the old evaluation, which included a list of all publications and prizes, was also implemented without any scientific evidence, it was a valid question. We needed to further understand the PhD perspective, and not only the perspective of PhDs in early career networks. Did PhD candidates think the new evaluation was an improvement, and if so, how could it be improved even further?

We used our DORA engagement grant to set up an in-depth interview project with a first group of PhD candidates using the newly developed evaluation guide and new version of the online PhD Competence Model. Feedback about the pros and cons of the new approach helps us shape PhD research assessment.

PhD candidates shape their own research evaluation

The main aim of the interview project was to understand if and how the new assessment helps PhD candidates to address and feel recognized for their work in various competencies. Again, we used our own neuroscience PhD program as a starting point. With the support of the director, coordinator, and secretary of our program, we arranged that all enrolled PhD candidates received an e-mail explaining that we were kicking off with the new PhD evaluation and that they were invited to combine their assessment with an interview with us.

Of the group that agreed to participate, we selected PhD candidates who were scheduled to have their annual evaluation within the next few months. As some of the annual interviews were delayed due to planning difficulties, we decided to also interview candidates who had already filled out the new form and competency tool, but who were still awaiting their annual interview with their PhD supervisors. In our selection, we made sure the group was gender diverse and included PhD candidates in different stages of their PhD trajectory.

We wrote a semi-structured interview topic guide, which included baseline questions about the demographics and scientific background of the participants, as well as in-depth questions about the new form, the web-based competency tool, the annual interview with PhD supervisors, and the written and unwritten rules PhD candidates encounter during their trajectory. We asked members of our Young SiT thinktank and the programmers of the competency tool to critically review our guide. We also included a statement about the confidentiality of the data (using only (pseudo)anonymous data), to ensure PhD candidates felt safe during our interviews and to promote openness.

We recruited a student (Marijn van het Verlaat) to perform the interviews and to analyze the data. After training Marijn how to use the interview guide, we supervised the first two interviews. All interviews were audio-taped and transcribed. Marijn systematically analyzed the transcripts according to the predefined topics in the guide, structured emerging themes and collected illustrative quotes. We both independently reviewed the transcripts and discussed the results with Marijn until we reached a consensus on the thematic content. Finally, we got in touch with a graphical designer (Bart van Dijk) for the development of the infographic. Before presenting the results to Bart, we did a creative session to come up with ideas on how to visualize the generic success factors and barriers per theme. The sketches we made during this session formed the rough draft for the infographic.

The infographic

In total, we conducted 10 semi-structured interviews. The participants were between 26 and 33 years old and six of them were female. Most were in the final phase of their PhD (six interviewees, versus two first-years and two in the middle of their trajectory). Figure 1 shows the infographic we made from the content of the interviews.

Most feedback we received about the form and the competence tool was positive. The form was considered short and relevant, and the open questions about individual accomplishments and learning goals were appreciated. Positive factors mentioned about the tool included its ability to monitor growth, by presenting each new self-evaluation as a spider graph with a different color, and the role it plays in learning and reflection.

The barriers of our new assessment approach were often factors that hampered implementation, which could be summarized in two overarching themes. The first theme was ‘PhD requirements’, with the lack of clarity about requirements often seen as the barrier. This was nicely illustrated by quotes such as “I think I need five articles before I can finish my thesis”, which underscore the harmful effect of ‘unwritten rules’ and how the prioritization of research output by some PhD supervisors prevents PhD candidates from discussing their work in various competencies. The second theme was ‘monitoring of the evaluation cycles’ and concerned the practical issues related to the planning and fulfillment of the annual assessments. Some interviewees reported that, even though they were in the final phase of their PhD, no interviews had taken place, as it was difficult to schedule a meeting with busy supervisors from different departments. Others noted that there was no time during their interview to discuss the self-evaluation tool. And although our GSLS does provide training for supervisors, PhD candidates reported that supervisors did not know what to do or how to discuss the competency tool.

After summarizing these generic barriers, we formulated facilitators for implementation, together with a call to action (Figure 1). Our recommendation to the GSLS, or in fact any graduate school implementing a new assessment, is to further train both PhD candidates and their supervisors. This not only exposes them to the right instructions, but also allows them to get used to a new assessment approach and, in fact, an ‘evaluation culture change’. For the developers of the new PhD competence tool, this in-depth interview project has also yielded a lot of important user feedback. The tool is being updated with personalized features as we speak.

Figure 1. Reshaping a PhD Evaluation System
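To make the spider-graph idea mentioned above more concrete, here is a minimal sketch of how two self-assessment rounds could be overlaid as radar charts so that growth per competency is visible at a glance. This is purely illustrative and is not the actual PhD Competence Model tool; the competency names and scores are invented for the example.

```python
# Illustrative sketch only (not the actual PhD Competence Model tool):
# overlay two hypothetical self-assessment rounds as "spider" (radar) charts.
import numpy as np
import matplotlib.pyplot as plt

competencies = ["Research skills", "Communication", "Teaching",
                "Project management", "Collaboration", "Career planning"]
round_1 = [2, 3, 1, 2, 3, 1]  # invented self-scores, year 1 (scale 1-5)
round_2 = [3, 4, 2, 3, 4, 2]  # invented self-scores, year 2

# One angle per competency; repeat the first point to close each polygon.
angles = np.linspace(0, 2 * np.pi, len(competencies), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for scores, label, color in [(round_1, "Year 1", "tab:blue"),
                             (round_2, "Year 2", "tab:orange")]:
    values = scores + scores[:1]
    ax.plot(angles, values, color=color, label=label)
    ax.fill(angles, values, color=color, alpha=0.15)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(competencies, fontsize=8)
ax.set_ylim(0, 5)
ax.legend(loc="upper right")
plt.show()
```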

Change the evaluation, change the research culture

The DORA engagement grant enabled us to collect data on our new PhD evaluation method. Next up, we will schedule meetings with our GSLS to present the results of our project and stimulate implementation of the new evaluation for all PhDs in our graduate school. And not only our graduate school has shown interest: other universities in the Netherlands have also contacted us to learn from the ‘Utrecht’ practices. That means 1800 PhD candidates at our university, and maybe more in the future, will soon have a new research evaluation. Hopefully this will be the start of something bigger, a bottom-up culture change driven by PhD candidates themselves.

*If you would like to know more about our ideas, experiences, and learning, you can always contact us to have a chat!

Email addresses: a.m.algra-2@umcutrecht.nl & i.koopman-4@umcutrecht.nl

References

1. Algra AM, Koopman I and Snoek R. How young researchers can and should be involved in re-shaping research evaluation. Nature Index, online 31 March 2020. https://www.natureindex.com/news-blog/how-young-researchers-can-re-shape-research-evaluation-universities

2. https://insci.nl/

Institutional challenges and perspectives for responsible evaluation in Brazilian Higher Education: Projeto Métricas DORA partnership
https://sfdora.org/2023/02/16/institutional-challenges-and-perspectives-for-responsible-evaluation-in-brazilian-higher-education-projeto-metricas-dora-partnership/

A DORA Community Engagement Grants Report

In November 2021, DORA announced that we were piloting a new Community Engagement Grants: Supporting Academic Assessment Reform program with the goal of building on the momentum of the declaration and providing resources to advance fair and responsible academic assessment. In 2022, the DORA Community Engagement Grants supported 10 project proposals. The results of the “Institutional challenges and perspectives for responsible evaluation in Brazilian Higher Education: Projeto Métricas DORA partnership” project are outlined below.

By Jacques Marcovitch, Justin Axel-Berg, Pedro Belasco, Dulce Silva, Elizabeth Balbachevsky, Luiz Nunes de Oliveira, Marisa Beppu, Nina Ranieri, Renato Pedrosa — Projeto Métricas/Fapesp (Brazil)

In recent years, responsible evaluation has become a topic of fierce interest in the academic community. Brazil has around 1,200 individual signatories of DORA, around the same number as France, Spain, and the United Kingdom. The country also has 391 institutional signatories, more than 150 more than the United Kingdom, which is in second place. Despite this, actual examples of responsible evaluation are few and far between: most evaluation is heavily quantitative at course, departmental, and individual level. Of the institutional signatories, the majority are academic journals and scientific associations. Just two of the country's large public universities are themselves signatories: the University of São Paulo (USP) and the State University of Campinas (Unicamp).

Brazilian higher education is at a crucial moment in its evolution, in which it finds itself called upon to justify the public investment placed in it. It must ensure that it is engaged in advancing the frontiers of knowledge, that this knowledge is spread as widely as possible, and that it finds answers to the multiple overlapping crises afflicting Brazilian and global society. There is increasing awareness that the tools used to measure scientific performance do not reflect the values and expectations that Brazilian academia and society in general hold for it. They are, in general, too quantitative, and too focused on the wrong things.

Our project began with an exploratory qualitative survey of individual signatories at these two universities to identify the perceived barriers to implementation of the recommendations of DORA. Of a possible 140 signatories, we received 37 responses. These responses were then collated according to the SPACE rubric and turned into a briefing comprising the key observations.

This briefing was then sent to a panel of twelve specialists and senior university leaders, all with significant experience and knowledge of responsible evaluation. The panel was composed of representatives from USP, Unicamp, Unesp, UFABC, Unifesp, UFES and UFF, who were invited to share their reflections and experiences in identifying the challenges and barriers to increasing the spread of responsible evaluation. A document that highlighted the key themes identified was produced.

To maximise the institutional reach of the initiative, a decision was made to hold the public event online, allowing representatives from institutions across Brazil to attend. One of the challenges we faced is that some institutions are heavily internationalised and moderately advanced in discussions about responsible evaluation, while others attend predominantly to local priorities and are at a less developed stage. Because the interest in responsible evaluation in Brazil comes predominantly from institutions and individuals, and not from government or external requirements, the situation is highly heterogeneous. Therefore, care was taken to ensure that the recommendations can be adopted both by institutions with no experience in dealing with qualitative evidence alongside quantitative evidence and by those with more extensive experience.

The final public event was held online on August 19th, 2022; a video recording can be found here. The event was opened by the vice rectors of three of the most important public universities in Brazil, and the three key priorities identified were presented by Paulo Nussenzveig (USP), Marisa Masumi Beppu (Unicamp), and Patrícia Gama (USP). The event had 214 total registrations, with around 150 in attendance, representing 93 different institutions and faculties from every region of the country. In evaluations of the event, participants were asked to identify how they planned to apply what they had learned, and plans to introduce DORA in departmental evaluation, institutional regimes, hiring processes, and federal funding agency committees were identified.

Finally, these results were synthesised into a document that is intended to serve as a guide for university leaders to plan and implement more responsible evaluation practices. This document will serve as the steering document for activities by Projeto Métricas in 2023, and it can be found here. It will be hosted on the Projeto Métricas portal, and a printed edition will be produced to enable members of the Métricas community to distribute and use it in institutional discussions.

From the report, three main priority areas were identified:

Awareness of responsible evaluation

Strategies are needed to raise the general awareness that can either lead up to or immediately follow adherence to DORA. Students should be made aware of the importance of course evaluation. Early career researchers should be made aware of the principles of responsible evaluation, as should those entering the university and more senior members of staff engaged in evaluation itself.

Given the diversity of areas of knowledge, institution types, career trajectories and socioeconomic factors present in Brazilian higher education, a wide variety of models for evaluation need to be established to ensure that this diversity and heterogeneity of mission, value and outcome can be respected.

Training and capacity building

Beyond the lack of knowledge of DORA or other documents, there is a clear problem with a lack of experience or capacity on the part of evaluators and those being evaluated. Where evaluations with more qualitative or flexible components exist, the quality of responses and evaluations is often inadequate. The road to more responsible evaluation requires training programmes and extra education to ensure that a culture of impact-driven, responsible evaluation is successful.

Evaluators, even when faced with large volumes of qualitative information about impact, are likely to depend on “citizen bibliometrics” and easy measures that can justify decision-making, even when these are inappropriate.

Without clear guidance and training on how to think about, write about, and gather evidence for the impact of their work, researchers submitting their work for evaluation are likely to rely either on quantitative measures or on unsubstantiated statements. They require training from the beginning of their career to plan research projects, execute them, and then write about them effectively.

Processes should then consider different levels of evaluation to select the appropriate instrument for measurement. While each level has specificities and peculiarities that must be considered to ensure that evaluation is appropriate, it is important that the interaction between levels is considered, ensuring that the results measured at one level contribute to the stated goals of the others. In this sense, evaluation is a holistic activity that balances individual interests with institutional goals.

Groups of evaluators should be identified who carry institutional memory and experience of previous cycles, are able to carry out the present cycle, and are also engaged in planning and giving feedback for future cycles of evaluation. This group should assess the quality of the assessment against the stated ambition of the unit being assessed and compare the results of this assessment with other processes in different areas of knowledge and other institutions.

Evaluation needs to have meaning. This is achieved either by celebrating and valuing outstanding achievement, or by highlighting where performance did not reach its intended goal. The reasons and justification for this performance must be clearly explained and understood and must lead to clear recommendations for future cycles of evaluation.

Execution and appraisal of evaluation

To identify and evaluate what is meaningful, the ideal time cycle for evaluation must be determined, and processes planned and produced according to a timeline.

Proper planning of evaluation cycles also prevents repetition of evaluation exercises and needless duplication of processes. Given that evaluation exhaustion is a well-documented phenomenon in higher education, with staff required to fill in the same information multiple times for different purposes, minimising it increases acceptance of new processes.

The evolution of evaluation also requires careful planning of actions over the short, medium, and long term. Sudden and dramatic change will be difficult, if not impossible, to enact within universities, so a clear idea of long-term goals, reinforced by short-term actions and priorities, is needed.

Objectives should be discussed and constantly revised for each successive cycle of evaluation. Because institutional objectives change over time according to internal and external factors, evaluation must also change over time to reflect shifting priorities. This review should be planned during an evaluation cycle, to be ready for the following one.

The next steps…

Having launched an initiative of national reach, and a document of consensus around the challenges and possible solutions, we must now work on consolidating a network of professionals engaged in changing evaluation. Because of the high heterogeneity we identified, helping higher education institutions to establish and pilot models that are appropriate for them will enable Brazil to convert this growing demand for responsible evaluation into concrete results.

Suggested citation: Projeto Métricas (2022). Institutional challenges and perspectives for responsible evaluation in Brazilian Higher Education: Projeto Métricas DORA partnership summary of findings. University of São Paulo, Brazil [PDF]. Available at https://metricas.usp.br/institutional-challenges-and-perspectives-for-responsible-evaluation-in-brazilian-higher-education/

Co-creating a Responsible Use of Metrics for Research Assessment in Colombian Science, Technology and Innovation System https://sfdora.org/2023/02/16/co-creating-a-responsible-use-of-metrics-for-research-assessment-in-colombian-science-technology-and-innovation-system/ Thu, 16 Feb 2023 22:38:06 +0000

A DORA Community Engagement Grants Report

In November 2021, DORA announced that we were piloting a new Community Engagement Grants: Supporting Academic Assessment Reform program with the goal to build on the momentum of the declaration and provide resources to advance fair and responsible academic assessment. In 2022, the DORA Community Engagement Grants supported 10 project proposals. The results of the Co-creating a Responsible Use of Metrics for Research Assessment in Colombian Science, Technology and Innovation System project are outlined below.

By Salim Chalela Naffah (Universidad del Rosario), Maria Alejandra Tejada, Diana Lucío-Arias, César Pallares Delgado — Colombia

The time has come

Following Derek de Solla Price’s contributions (1), scientific capabilities in Colombia —evidenced in the human, technological and social efforts mobilized around the creation, circulation and appropriation of scientific knowledge— have grown exponentially in the last 20 years. This growth can be seen in the simplest of indicators, such as the rise in journal publications with at least one author affiliated with a Colombian institution, but also in more complex ones, such as the growth in authors publishing their first article, PhD programs, PhD graduates, involvement in international collaboration networks, research groups, and researchers in the country, just to name a few. The growth in scientific capabilities has motivated a national reflection on the most suitable infrastructure, mechanisms and tools to promote consolidation and decentralization, as capabilities have so far been concentrated around the country’s biggest cities.

Colciencias —Colombia’s former public organization responsible for promoting science and technology— was formalized in 1968 as a public fund attached to the Ministry of Education (2) and has adapted in order to attend, on the one hand, to the requirements of a more diverse and demanding scientific community and, on the other, to contribute to Colombia’s transition to a knowledge-based economy and society. In 2020 Colciencias became the Ministry for Science, Technology and Innovation. Despite this transition, and the growth in the human, technological, institutional and social capabilities for generating, circulating and promoting the appropriation of scientific knowledge, over the past 20 years less than 0.3% of Colombia’s GDP has been spent on R&D (3), while 64% of research groups and 79% of researchers in the country registered their address in one of its 5 major cities (4).

Until the mid-1990s, public policy efforts to promote R&D in Colombia followed a model of research based on small elites producing and validating knowledge through peer review processes. This encouraged the consolidation of a few strong, internationally visible and very specialized, closed disciplinary clusters attracting most of the national funds. Additionally, the slow transition of Colombia’s manufacturing and service sectors to more digital, technology- and knowledge-based environments has contributed to the low levels of incorporation of PhDs into sectors other than academia. At a more macro level, this led to the omission of the productive sector as a valuable space for validating the application of scientific knowledge generated in the country’s universities, research centers, groups and labs.

Two important elements contributed in the first decades of this century to the discussion of a shift in the ways money was allocated to research, and thus in the ways that research products and results were valued: the diversification of scientific skills and the expansion of national PhD programs —some interdisciplinary by nature— in the social and human sciences, in engineering and in the arts. This discussion led to the implementation of systems for the collection of geographically dispersed information that could inform science policy; but because these information systems have been modeled, and thus used, following the natural and exact sciences, they can constrain and demotivate plurality and diversity in the forms of generation, communication, circulation and appropriation of scientific knowledge, on occasion at the expense of more direct impacts in local contexts.

At this moment, the growing number of PhD graduates from a variety of disciplines demands a comprehensive and participative scientific policy that privileges a more diverse and better-informed allocation of resources, and integrates an unbiased system of recognition of the ways knowledge is produced, transferred and used. Against this historical and contextual background, the Responsible Metrics project was launched in 2021 and was subsequently selected as one of the initiatives to receive funding from DORA’s Community Engagement Grants program. The project capitalized on experience with research, monitoring and assessment exercises from diverse sectors, institutional backgrounds and geographical locations, with the purpose of diagnosing the main challenges and difficulties of current research assessment efforts. The collective nature of the project meant that the financial resources obtained were to be used to broaden its scope so as to involve more actors in the reflection around responsible metrics for the assessment and evaluation of research. The discussion was organized in eight meetings of the scientific commission, as well as five “chairs”, which were designed as formative spaces. The funding obtained also allowed the implementation of specific spaces for action and proposition with vice-chancellors, which concluded in a do-a-thon around specific actions that should be addressed in the short term to improve the evaluation of research results. The state of the art in evaluation instruments and mechanisms was balanced against the urgent challenges in the different dimensions involved in the assessment of the research process, its results and their impacts, acknowledging their disciplinary plurality and incommensurability.

The information generated in the different spaces was then systematized using 7 categories aligned with the recommendations gathered in the different meetings, which, in broad terms, were related to:

  1. creating spaces with the participation of multiple actors to agree on evaluation principles, instruments and contexts;
  2. studying the viability of including, among the relevant criteria in the evaluation, the local needs and demands for new scientific knowledge from the regions and diverse territories of the country;
  3. transforming the systems of incentives and monetary recognition for production so that they consider a broader diversity of results, not only articles in quartile-positioned journals;
  4. promoting a qualitative perspective in evaluation to complement traditional quantitative indicators;
  5. articulating the assessment of higher education institutions with the efforts that have been sustained by the Ministry of Science, Technology and Innovation;
  6. ensuring that evaluation is robust, transparent, participative and fair to diversity; and
  7. implementing a mechanism that allows for the characterization of actors in order to protect research diversity.

Systematizing the information along these recommendations allowed a schematic visualization of the interrelations among recommendations, missing elements, and other topics that will be important to add to the final version of the recommendations.

Among the recurrent topics proposed by the participants in the different spaces was the need to recognize the richness of the information that the country has collected but that has lacked deep and systematic analysis. The model to “recognize and classify” (5) researchers and research groups has made it possible to capture information on more than 5,700 research groups and more than 16,500 researchers. This information should be analyzed using analytical techniques such as text mining and triangulated with more qualitative data to understand what is needed to consolidate a scientific workforce and infrastructure at the service of the country’s different regions. Participants issued an invitation to profit from the richness of the information collected through the different information systems, together with a call for critical reflection on the negative externalities that might arise from valuing research products according to the characteristics of their means of circulation rather than the characteristics of the products themselves.
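To make that recommendation concrete, the following is a minimal sketch of what such a text-mining pass over the collected group records might look like. It is purely illustrative: the file name, the column names, and the choice of TF-IDF vectorization with k-means clustering are assumptions for the sake of the example, not the project’s actual pipeline or the structure of the ScienTI records.

```python
# Illustrative sketch only: the report recommends applying text mining to the
# ~5,700 research-group records; the file name, column names, and the choice of
# TF-IDF + k-means below are assumptions, not the project's actual pipeline.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical export of group records (e.g., drawn from the ScienTI platform).
groups = pd.read_csv("research_groups.csv")  # assumed columns: group_id, region, description

# Represent each group's stated research focus as a TF-IDF vector.
# (scikit-learn ships no Spanish stop-word list, so none is applied here.)
vectorizer = TfidfVectorizer(max_features=5000)
X = vectorizer.fit_transform(groups["description"].fillna(""))

# Cluster the groups into broad thematic areas; k = 12 is arbitrary.
groups["theme"] = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(X)

# Cross-tabulate themes against regions: a first quantitative view that could
# then be triangulated with qualitative data, as the participants suggested.
print(pd.crosstab(groups["theme"], groups["region"]))
```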

The need to break with traditional indicators built from information in traditional indexing and abstracting systems (primarily Scopus and Web of Science) and to recognize other means of circulation of scientific knowledge resonated as well in the different spaces. Ahead of its time, Latin America bet on open access to scientific knowledge as a way to promote circulation and visibility through free access to scientific content, a rather different rationale from today’s movements that, through costly processing charges, reinforce asymmetries in knowledge circles. This led to a consensus among participants on the need to raise awareness of the inconvenience, and unfairness, of valuing a scientific contribution more for the quartile of the journal in which it appears than for its impact, even when that impact is only scientific and measured in citations.

Perhaps the most valuable contribution of these resources to the Responsible Metrics project is the consolidation of a national network, promoted on behalf of universities and higher education institutions by ASCUN and in collaboration with public entities, that will continue the constructive dialogue around a policy brief with a series of recommendations to orient the responsible assessment of research, while advocating for evidence-informed policies instead of policies aimed at metrics. The network recognizes the importance of diversity and participation in the purposes and levels of assessment; in the forms of research, its products, and their circulation, appropriation and impact; and therefore in the assessment conditions, weightings and evaluations. The presence in this network of actors at the national and institutional levels should nurture a healthy ecosystem for the generation, circulation and appropriation of disciplinarily diverse scientific knowledge.

Resources

  1. Price, D. J. de Solla
    • Price, D. J. de Solla (1963). Little Science, Big Science. New York: Columbia, University Press.
    • Price, D.J. de Solla (1965). Networks of scientific papers. Science, 149(3683), 510-515.
    • Price, D.J. de Solla (1978). Toward a Model for Science Indicators. In Y. Elkana, J. Lederberg, R.K. Merton, A. Thackray, & H. Zuckerman (Eds.), Toward a Metric of Science: The Advent of Science Indicators.
  2. Its full name was Colombian Fund for Scientific Research and Special Projects “Francisco José de Caldas” (Colciencias) (https://ocyt.org.co/wp-content/uploads/2021/06/colciencias40.pdf)
  3. https://ocyt.org.co/indicadoresctei2020.ocyt.org.co/Informe%20Indicadores%20CTeI%202020%20v1.pdf
  4. https://minciencias.gov.co/la-ciencia-en-cifras/grupos
  5. scienti.minciencias.gov.co

Creating a Platform for Dialogues on Responsible Research Assessment: The Issue Map on Research Assessment of Humanities and Social Sciences https://sfdora.org/2023/02/16/creating-a-platform-for-dialogues-on-responsible-research-assessment-the-issue-map-on-research-assessment-of-humanities-and-social-sciences/ Thu, 16 Feb 2023 22:36:09 +0000

A DORA Community Engagement Grants Report

In November 2021, DORA announced that we were piloting a new Community Engagement Grants: Supporting Academic Assessment Reform program with the goal to build on the momentum of the declaration and provide resources to advance fair and responsible academic assessment. In 2022, the DORA Community Engagement Grants supported 10 project proposals. The results of the Creating a Platform for Dialogues on Responsible Research Assessment: The Issue Map on Research Assessment of Humanities and Social Sciences project are outlined below.

By Futaba Fujikawa and Yu Sasaki — Kyoto University Research Administration Office (Japan)

Background and Purpose of the Project

Japan is a latecomer to the debate on Responsible Research Assessment (RRA). Traditionally, institutional assessments of research organizations have primarily been based on peer-review exercises. This is now rapidly changing. Policy makers have begun to perceive that Japanese research capability is in decline, based on the reduced growth rate of journal publications by Japanese scholars. These perceptions of decline have resulted in increased emphasis on metric-based approaches.

It is in this context that the Science Council of Japan issued a recommendation that raised fresh questions about the use of metrics, especially in procedures for resource allocation. These concerns about metrics, assessments and goal setting have understandably led to increased interest in the various issues linked to research evaluation. Gradually but steadily, the RRA agenda is gaining support. As of September 2021, only three of the more than 2,600 organizations that had signed DORA were Japanese. A year later, the number had increased to ten. But there are still no signatories from Japanese universities or research funding agencies.

The Japan Inter-institutional Network for Social Sciences, Humanities and Arts (JINSHA) has taken the lead in incubating discussions on research evaluation exercises in social sciences and humanities (SSH) research since 2014. More recently, responsible metrics and RRA have been key instrumental ideas that the network has taken up in various seminars and workshops to create a forum for continuous discussion and dialogue.

Building on the discussions and information accumulated so far, how can we move forward to the practice of RRA? To take up this challenge, we decided to create a visually appealing “map” of the key issues and information regarding research assessment. Our aim is to build common ground for discussing how far we can develop credible assessment exercises, while avoiding repetition of the same arguments. By addressing gaps in existing knowledge and awareness of research assessment issues, we also intend to encourage University Research Administrators (URAs) and stakeholders to discuss how practical measures can be adopted to enable the implementation of RRA.

Project Process

The project was implemented through the following process:

Planning
In early June 2022, we held a kick-off meeting with the working group (WG) members from the JINSHA network and a meeting with the project members of the nonprofit organization MIRA TUKU to discuss our purpose and the necessary steps forward. The members of MIRA TUKU, who are involved in various co-creation projects, continued to collaborate with us throughout the project.

Interviews
In preparation for the mapping, we interviewed the following experts in order to secure in advance knowledge and basic information regarding the assessment of HSS research that is not available in the literature.

  • Shota FUJII, Associate Professor, Social Solution Initiative, Osaka University
  • Makoto GOTO, Associate Professor, National Museum of Japanese History
  • Ryuma SHINEHA, Associate Professor, Research Center on Ethical, Legal, and Social Issues, Osaka University

Literature Listing
The WG members worked together to compile a list of approximately 65 references and resources on the assessment of HSS research and related topics. The WG members then voted, with recommendations and comments, and narrowed the list down to 18 references.

Extracting the Issues
The project members of MIRA TUKU extracted 90 issues/discussion points from the 18 selected references and the 3 interviews. The 90 issues were grouped and mapped along tentative axes.
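The grouping and axis-placement described above was done by hand. For readers curious how a first draft of such a layout could be approximated computationally, the sketch below projects a set of issue statements onto two axes using TF-IDF and a truncated SVD. It is only a hypothetical analogue under stated assumptions: the issues.txt file and the method are illustrative, not part of the project.

```python
# Hypothetical analogue of "grouping and mapping with tentative axes".
# The project did this manually; issues.txt (one issue statement per line) is assumed.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

with open("issues.txt", encoding="utf-8") as f:
    issues = [line.strip() for line in f if line.strip()]

# Vectorize the issue statements and project them onto two latent axes.
X = TfidfVectorizer().fit_transform(issues)
coords = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

# Each issue now has a pair of coordinates that can seed a draft 2-D map,
# to be relabelled and rearranged by the working group.
for issue, (x, y) in zip(issues, coords):
    print(f"{x:+.2f}\t{y:+.2f}\t{issue[:60]}")
```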

Workshop
Based on the issues identified above and a tentative mapping, discussions were further deepened at the 14th JINSHA Meeting “The Series on Responsible Research Assessment – Creating a ‘Map’ to Move One Step Forward” (held online on July 28) with three discussion groups:

1: Visualization of research output of the HSS research
Starting from issues such as “What indicators and evidence can measure the quality of the humanities and social sciences?”, group 1 explored what URAs can do to ensure RRA from the researchers’ standpoint. The members of this group brought up points including:

  • Each researcher should be able to talk about the meaning of his or her research, and URAs should be able to draw it out from the researchers. If we do not continue this effort, we will not be able to lay the foundation for research assessment, let alone visualization.
  • Although there is no sufficient database, some tangible, quantitative assessment is unavoidable in order to communicate with the natural sciences.
  • The humanities and social sciences are already accepted by society. In terms of understanding what people find valuable, a marketing perspective is necessary for visualization, and this is where URAs can play a role.

2: Meeting Quantitative Assessment Needs
This group explored questions including: even with the diversity of HSS research, how should URAs deal with the need for institutional and individual quantitative assessment?

  • If an assessment indicator for a particular program is set as the number of Top 10% papers, we have no choice but to follow it. But it is also possible to simultaneously demonstrate other qualitative achievements that contributed to the originally stated goal of the program. To do so, it is necessary to refine our methods for effectively visualizing results so that they can be presented when needed. URAs can play an active role in this area.
  • It would be good if the map could be used as a tool to establish a shared mindset when discussing responses to specific programs. Also, when we “lose sight” (when we are sweating to make our performance against an indicator look good), we can go back and ask what the limitations of the data are, what the original purpose is, and what RRA is. The map could be used as a reminder.

3: How to make a map
This group considered the map itself from an overarching perspective, and how it could help us connect the discussions and information accumulated to date to RRA practices.

  • When we try to deal with an issue, there may be peripheral issues that become barriers. If we can add those peripheral issues to the map as well, it will help focus attention on the issues that need to be solved.
  • When actors in different positions (policy makers, funding agencies, university executives, researchers, and URAs) talk, they often talk past one another rather than engaging in discussion. It would be good if a map could help them share the issues and foster common understanding.

After discussions at the workshop, it became clearer what kind of map we should make:

  • Since issues involving research assessment are complex, the map will help locate the topics and understand which positions the discussants are taking.
  • The map can also be used as common ground to enable discussion by visualizing various assumptions and scattered information.
  • It would be ideal to see the expansion of a network that can meet for the purpose of upgrading the map and regularly confirming which parts of the map have progressed.

Outcome: Issue Map on Research Assessment in the Humanities and Social Sciences

Based on these ideas, we created the final map. The 29 issues are divided into “regions” and placed in relation to each other. Starting from “Purpose of Assessment” in the upper left corner, the map takes us through a set of issues regarding the assessment system itself and methods such as qualitative and quantitative assessment. Between these two methods lie the issues regarding impact assessment and, beyond that, the larger questions regarding society.

If we compare society to the ocean, the research community cannot survive without the blessings of the “ocean”. It is also important to enrich this “ocean”, that is, to convince people that it is acceptable to invest resources in academic endeavor. Once that is done, just as clouds form from water vapor over the ocean and rain falls on the mountains to enrich the land, the discussion will return to “what assessment is in essence”, prompting a reconsideration of assessment methods and leading to improvements in assessment, in what may be described as a kind of research ecosystem.

Figure. Issue Map on Research Assessment in the Humanities and Social Sciences Ver. 1

Future Prospects

As described above, the map was created through a series of processes carried out by a cooperative team spanning institutions and affiliations, with URAs playing a central role. In a situation where there is still a long way to go in implementing RRA due to various factors, we aimed to create a map that can be used for practical purposes, rather than simply organizing past discussions, by bringing together the strengths of diverse actors and deepening discussions as the project progressed. The know-how of MIRA TUKU in structuring and effectively visualizing information, the knowledge of the experts, and the many dialogues that took place during the course of this project all contributed to the creation of this map.

We hope that this map will show multiple paths to go beyond the issues, and that the network of discussion will continue to expand as Ver. 2 and Ver. 3 of the map are developed, reflecting the opinions of those who have used it.

Resources

Science Council of Japan, Subcommittee on Research Evaluation, Committee for Scientific Community. Recommendation ‘Toward Research Evaluation for the Advancement of Science: Challenges and Prospects for Desirable Research Evaluation’. https://www.scj.go.jp/ja/info/kohyo/pdf/kohyo-25-t312-1en.pdf

Exploring the Current Practices in Research Assessment within Indian Academia https://sfdora.org/2023/02/16/community-engagement-grant-report-exploring-the-current-practices-in-research-assessment-within-indian-academia/ Thu, 16 Feb 2023 22:31:39 +0000

A DORA Community Engagement Grants Report

In November 2021, DORA announced that we were piloting a new Community Engagement Grants: Supporting Academic Assessment Reform program with the goal to build on the momentum of the declaration and provide resources to advance fair and responsible academic assessment. In 2022, the DORA Community Engagement Grants supported 10 project proposals. The results of the Exploring the Current Practices in Research Assessment within Indian Academia project are outlined below.

By Suchiradipta Bhattacharjee and Moumita Koley — Indian Institute of Science (India)

Background: Standards for Scholarship

The state of research evaluation in India

Research assessment is an integral part of India’s vast research and academic ecosystem. Surprisingly, though, how it is done is not well understood. Are assessments quantitative, qualitative, or a mix of both? Researchers associated with the system have a vague idea, but scholarly literature on this subject is almost non-existent. For the most part, assessment is driven by quantitative metrics, but a good number of institutions follow a more qualitative approach. Be it qualitative or quantitative, the underlying factor of assessment remains the same: how popular or established a scientist is in her field, a judgment that goes back to research articles, journal prestige and citations. It is therefore reasonable to ask whether the assessment process has stagnated for decades. Moreover, it is also fair to ask whether the present system encourages novel knowledge creation, innovation or social impact.

Importance of the Project

Scientists respond to incentives just as everyone else does. Is the present assessment system incentivising scientists to engage in groundbreaking research, or just rewarding them for picking the low-hanging fruit that may lead to a good number of papers in reputed journals? With India’s steady investment in R&D and its commitment to increased utilization of science and technology to solve societal challenges, we felt it was necessary to understand whether the present system rewards incremental work and over-exploration of research fields rather than breakthrough or socially relevant scientific research.

Inspiration for the Project

Over-reliance on readily available journal-based indices to assess the quality of research is now a global phenomenon and problem, and India is no exception, at least partially. The journal impact factor (JIF) is a number that provides a false sense of superiority to researchers: the higher the impact factor of a journal, the higher the perceived reputation and quality of the journal. Similarly, researchers’ performance and achievements become synonymous with high h-index scores or numbers of citations received.

Against this backdrop, we wanted to explore the research assessment practices followed by Indian academics, and start the necessary conversation surrounding the importance of a robust research assessment process that will encourage transformative discoveries and breakthrough science.

Project Process and Outcomes

Project Processes

  1. Workshop-cum-panel discussion with national funding agencies: A one-day workshop-cum-panel discussion in hybrid mode was organized in collaboration with the Department of Science and Technology (DST), Government of India, to deliberate on the research assessment practices used by India’s national funding agencies. Participants were scientific administrators from the Department of Science and Technology, Department of Biotechnology (DBT), Council of Scientific & Industrial Research (CSIR), Science and Engineering Research Board (SERB) and the Indian Council of Medical Research (ICMR). The SCOPE framework was used to understand current practices and their strengths and weaknesses. The findings emphasized the need for an India-centric research evaluation framework that integrates disciplinary contexts. More details of the event can be found here.
  2. Workshop on Assessment Practices in Indian Research Ecosystem: A two-day joint workshop was organized by the DST-CPR, IISc and the Indian National Young Academy of Sciences (INYAS) with INYAS members and alumni as participants. The participants deliberated on the strengths and weaknesses of the current research assessment practices in India. The discussions and recommendations of the workshop can be accessed here.
  3. Personal interviews were conducted with representatives from research institutions, academia, and funding agencies. The respondents ranged from early career researchers to senior professors who are also members of various research evaluation committees and scientific administrators of Indian funding agencies (DST & DBT). The purpose of these interviews was (a) to understand research assessment practices in various institutions and funding agencies within the Indian research ecosystem, along with their strengths and weaknesses, and (b) to understand the perspectives of researchers at different career stages on how responsible the current research assessment practices are.
  4. Surveys: With the objective of collecting and collating more information about how research assessment is being carried out in institutions, we also initiated two surveys – one for the STEM researchers and another for academicians and researchers in the agriculture discipline. A total of 30 responses were received through the two surveys.

Challenges Encountered and Their Possible Solutions

  1. Institutional bureaucracy has been a challenge, as workshops with institutions, research councils, funding agencies, etc., have taken much longer than anticipated to clear official processes. Moreover, as activities gradually move offline again with the decline in COVID-19 cases, institutions are actively trying to clear their backlogs of outreach activities, making it harder to secure their availability as expected.
  2. Shift from online to hybrid workshop mode: because COVID-19 cases were declining as the project activities picked up pace, the identified stakeholders were more interested in a physical workshop for better engagement and participation. Because of this unanticipated development, one workshop was modified to a hybrid/physical format.

Since physical workshops need a higher budget, we co-organized them with the Department of Science and Technology, Ministry of Science and Technology, Government of India, to share the cost and access the physical infrastructure required to host them.

Outcomes of the Project

Outreach and engagement

One of the major objectives of the project was to initiate a discussion about the necessary reforms in assessment practices across the research ecosystem in India. We have been able to engage a multitude of stakeholders in the Indian research ecosystem – early and mid-career researchers, faculty from research and academic institutions, senior academicians and researchers, science administrators, and grants management teams at funding agencies.

The workshops were a success in that regard, as the science administrators acknowledged the necessity of reassessing assessment practices and taking definitive steps towards reform. However, considering the magnitude of the research ecosystem in India, such activities need to be expanded to include more data points. This will help capture a clearer picture of the research assessment system.

Output

Findings – Research assessment practices in India

Figure. Assessment practices across research and academic institutions in India

Within the institutions, the variability is much higher. For example, state and central universities are largely dependent on quantitative metrics, while institutions like IITs have a mix of both, though leaning towards a more qualitative assessment. IISc has adopted a completely qualitative, peer-reviewed process with internal and external (including those from outside India) peer reviewers.

Funding agencies in India have adopted a more qualitative approach, especially beyond the initial screening of applicants. The degree varies between departments (DST, DBT, SERB, CSIR, ICMR, ICAR, etc.) and their efficiency in the process. The primary benchmark for research assessment is the judgment of research proposals and CVs by expert committees. However, the challenges identified include the diversity of committee members (in terms of experience, inclusivity, disciplinary diversity, and industry and civil society representation), the capacity to handle large numbers of proposals (for funding agencies), the personal biases of committees, and a lack of awareness about open science practices and the societal impacts of research.

On the other hand, the assessment practices of the agriculture research system (Indian Council of Agricultural Research, ICAR), CSIR, universities, and a majority of research institutions of national importance are primarily based on quantitative measures. Research institutions, especially for internal promotions and grant allocation, depend on quantitative metrics with a few qualitative aspects; still, various activities are considered in the process, not just the number or impact factor of publications. In the case of universities, evaluation criteria for promotions are very much focused on publication-based metrics.

Future Prospects

Plans for continual improvement and adaptation

This project now stands at an interesting junction. The initial findings are exciting, especially since quite a few institutions and funding agencies employ a more qualitative approach to research assessment. Expanding the understanding of their approaches, challenges and opportunities will be helpful not only for recommending how they can be improved, but also as potential case studies for research ecosystems globally that are looking to adopt a more qualitative approach to research assessment. However, to reach such a stage the whole study needs to be expanded to a substantial number of respondents. The current study design also needs some refinement, though the basic structure can remain the same.

Future plans

We plan to apply for funding to expand this study, and the initial findings will serve as background material for a possible larger project proposal.

About the authors

Suchiradipta Bhattacharjee is an STI Senior Policy Fellow hosted at the DST-Center for Policy Research, Indian Institute of Technology, Delhi.

Moumita Koley is an STI Post-Doctoral Policy Fellow at the DST-Center for Policy Research, Indian Institute of Science, Bangalore.

Psicología (con)Ciencia Abierta (Argentina): an event to advance the implementation of open science practices and research assessment reform in psychology and the social sciences https://sfdora.org/2023/02/16/community-engagement-grant-report-psicologia-conciencia-abierta-argentina-an-event-to-advance-the-implementation-of-open-science-practices-and-research-assessment-reform-in-psychology-and-the/ Thu, 16 Feb 2023 22:23:05 +0000

A DORA Community Engagement Grants Report

In November 2021, DORA announced that we were piloting a new Community Engagement Grants: Supporting Academic Assessment Reform program with the goal to build on the momentum of the declaration and provide resources to advance fair and responsible academic assessment. In 2022, the DORA Community Engagement Grants supported 10 project proposals. The results of the “Psicología (con)Ciencia Abierta” Open Science Practices for Research and Research Assessment in Psychology and the Social Sciences project are outlined below.

By Nicolás Alessandroni, María Cristina Piro, Xavier Oñativia, Constanza Zelaschi, Maximiliano Vietri, Iván Suasnábar — Faculty of Psychology, Universidad Nacional de La Plata (Argentina)

Background

On September 16, 2022, the Faculty of Psychology of the Universidad Nacional de La Plata (UNLP, Argentina) held the virtual event “Psicología (con)Ciencia Abierta: Open Science Practices for Research and Research Assessment in Psychology and the Social Sciences”. The event, funded through a DORA Community Engagement Grant, was organized by a group of administrators and open science advocates coordinated by Dr. Nicolás Alessandroni. Over eight hours (10 AM–6 PM; GMT-3), attendees enjoyed one workshop and two panel discussions on open science and research assessment, with the presence of national and international Spanish-speaking experts. This was particularly relevant because, in Argentina, there are few spaces to discuss the implementation of open science and research assessment best practices in psychology and the social sciences.

Consistent with the spirit of the Declaration on Research Assessment (DORA), the event’s aim was twofold. On the one hand, to enable participants to identify and reflect on best practices for conducting and evaluating scientific research in psychology and the social sciences. On the other hand, to lay the foundations of an action plan for improving evaluation processes of academic production in the local context.

Although the event was open to all and had a virtual format to maximize the participation of all interested parties, it was designed with the academic community of the Faculty of Psychology UNLP—the host institution—in mind. Accordingly, three distinct groups of people were identified and targeted by the event: faculty members, researchers, and alumni; graduate and undergraduate students; and administrators of universities and research institutions. All the contents of the event were distributed through a website, an email account (ciencia.abierta@psico.unlp.edu.ar), and different social media accounts (i.e., Facebook and Instagram profiles for the event, and the Instagram profile and website of the Faculty of Psychology UNLP).

Project processes and outcomes

The event included a workshop and two panel discussions:

  • The workshop “Una invitación a la ciencia abierta” [“An invitation to open science”] took place between 11 am and 1 pm, with presentations by Dr. Remedios Melero (Spain), Dr. Antonio Laguna-Camacho (Mexico), Dr. Gonzalo Villareal (Argentina), and Lic. Fernando Tonini (Argentina). This introductory session connected many students and researchers with open science for the first time. At the beginning of their presentations, the speakers provided a personal definition of “open science”. They also identified the practices it comprises and described the links between open science and research evaluation. In addition, they provided references to a variety of essential resources so that those unfamiliar with open science could learn more after the event. The attendees actively participated by posting their comments and questions on a virtual wall created online, allowing a fluid exchange at the end of the speakers’ presentations.
  • The first panel discussion took place between 2 and 4 pm and was entitled “Desafíos para la investigación en psicología y ciencias sociales desde el paradigma de la ciencia abierta” [“Challenges for researching in psychology and the social sciences from the open science paradigm”]. It was led by Dr. Nicolás Alessandroni (Canada/Argentina). Dr. Alessandroni focused his presentation on five open science practices: (a) publishing pre-prints, (b) doing open peer review, (c) engaging in open access publishing, (d) depositing open data in open repositories, and (e) pre-registering studies. He described the specifics of each practice and presented the challenges faced by researchers in incorporating them into their daily workflows. He also discussed the interactions between increasingly widespread forms of conducting research nowadays (e.g., Big Team Science) and the growing number of open science mandates developed by governments and research institutions worldwide. Finally, considering the global progress in the implementation of open science practices and the evolving landscape of research assessment, he reflected on the importance of generating institutional criteria to assess the broad spectrum of outputs of scientific practice during academic evaluation processes.
  • “La ciencia abierta y la evaluación de la producción científica” [“Open science and research assessment”] was the title of the second panel discussion that brought together, from 4 to 6 p.m., Lic. Esp. María Cristina Piro (Argentina), Dr. Marisa de Giusti (Argentina) and Lic. Soledad Cottone (Argentina). The three panel members hold management positions in educational institutions in Argentina. This fostered a rich discussion that revolved around the reforms that management teams can encourage to promote the implementation of open science practices and more transparent and democratic methods for academic evaluation. Discussed issues included the limitations of current incentive structures, the disconnection that often exists between research activity and the needs of local communities, the lack of consideration of the social impact of research, the difficulties in making changes in the curriculum to integrate open science practices, and the applicability of the laws that regulate research practices and their evaluation in Argentina. Most notably, all these items connect well with the five dimensions considered in the SPACE Rubric to evolve academic assessment.

A total of 643 people from 94 institutions registered for the event. As for the countries represented, there were people from Argentina (538), Paraguay (54), Bolivia (14), Colombia (9), Peru (7), Uruguay (6), Ecuador (4), Spain (4), Chile (2), Costa Rica (2), México (2), and Guatemala (1). A highlight is that 61% of the registrants (395) were undergraduate students, demonstrating the interest of young people—who will be the next generation of researchers—in more open, responsible, and inclusive ways of investigating and evaluating academic production. Among registrants, 126 were faculty/researchers, 45 were alumni, 39 were graduate students, 19 were administrators, 7 were administrative staff, and 12 belonged to other groups (see Figure 1).

Figure 1. Number of registrants by group

In terms of fields, 596 registrants belonged to psychology, 7 to sociology, 6 to anthropology, 3 to psychopedagogy, 3 to social communication, and 28 to other areas. Overall, these data show that the event reached a varied audience, representing individuals from the local community at different stages of career development in various disciplinary fields.
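As a quick cross-check of the registration figures quoted above, the short snippet below re-derives the group percentages from the reported counts. The counts themselves are taken directly from this report; the snippet adds nothing beyond the arithmetic.

```python
# Registrant counts by group, as reported above.
registrants_by_group = {
    "undergraduate students": 395,
    "faculty/researchers": 126,
    "alumni": 45,
    "graduate students": 39,
    "administrators": 19,
    "administrative staff": 7,
    "other": 12,
}

total = sum(registrants_by_group.values())  # 643, matching the reported total
for group, count in registrants_by_group.items():
    print(f"{group:<24} {count:>4}  ({count / total:.0%})")
# Undergraduates: 395 / 643 ≈ 61%, consistent with the share quoted in the text.
```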

Future prospects

The Faculty of Psychology UNLP has included open science and academic assessment reform as an institutional priority within its 2022-2026 management plan. The event’s conclusions will inform future endeavors, including more training and discussion events and the design of institutional policies. The organizing team is currently transcribing the presentations that took place during the day. This material, together with illustrations generated by Julieta Longo after each session (openly available under a CC-BY license), will serve as the basis for a book aimed at contributing to the promotion of open science and the reform of academic evaluation in the Spanish-speaking context (see Figure 2).

Figure 2. Illustrations by Julieta Longo, produced after each session

Those interested in contacting the organizing team can do so via email at ciencia.abierta@psico.unlp.edu.ar.
