Guidelines and Best Practices

 

Throughout the research process, there are many opportunities to apply Open Science principles. This section provides a structured guide to the most relevant practices at each stage of the research process, helping you apply them clearly and effectively.

 

Pre-registration: Plan and Protect Your Research from the Start

Pre-registration is the process of documenting your research plan before collecting data. It involves registering your hypotheses, methodology, inclusion and exclusion criteria, and planned statistical analyses in advance.

Why pre-register your study?

  • Greater transparency and credibility: Helps prevent publication bias and post hoc changes to hypotheses or analyses.
  • Improved reproducibility: Allows other researchers to understand exactly how the study was designed and compare results with the predefined analysis plan.
  • Protection of ideas: If your pre-registration is published on a platform, it establishes a timestamped record of your research idea before others publish similar work.
  • Compliance with journal and funder requirements: An increasing number of journals and funding agencies value or require pre-registration as part of good scientific practice.

What should a pre-registration include?

  • Main hypotheses and, where applicable, exploratory hypotheses.
  • Experimental design (independent variables, dependent variables, controls, etc.).
  • Inclusion and exclusion criteria for participants and data.
  • Statistical analysis plan (including the tests to be used and the conditions under which they will be applied).
  • Number of participants and justification of sample size (including power analysis when possible).
  • Details of the stimuli and materials used.
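The elements above can also be captured in a simple machine-readable form. Below is a minimal sketch in Python; the field names are illustrative and do not correspond to any platform's official schema (OSF, AsPredicted, and PROSPERO each use their own forms):

```python
# Minimal pre-registration template as a Python dict.
# Field names and example values are illustrative; adapt them to the
# form used by your chosen platform (OSF, AsPredicted, PROSPERO, etc.).
preregistration = {
    "hypotheses": {
        "main": ["H1: Condition A yields faster responses than condition B."],
        "exploratory": [],
    },
    "design": {
        "independent_variables": ["condition (A vs. B)"],
        "dependent_variables": ["response time (ms)"],
        "controls": ["counterbalanced trial order"],
    },
    "inclusion_criteria": ["adults aged 18-65", "normal or corrected vision"],
    "exclusion_criteria": ["overall accuracy below 60%"],
    "analysis_plan": {
        "tests": ["paired t-test on mean response times"],
        "conditions": "applied only if normality holds; otherwise Wilcoxon",
    },
    "sample": {
        "n_participants": 40,
        "justification": "a priori power analysis (d = 0.5, power = 0.80)",
    },
    "materials": "word stimuli drawn from a normed database",
}

# Every element listed above should be fixed before data collection.
required = ["hypotheses", "design", "inclusion_criteria",
            "exclusion_criteria", "analysis_plan", "sample", "materials"]
assert all(key in preregistration for key in required)
```

Keeping the plan in a structured file like this makes it easy to paste into a registration form and to compare the final analysis against what was pre-registered.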

Where can I pre-register my study?

Several platforms allow researchers to create a public, timestamped pre-registration, ensuring transparency in the research process. Below is a brief description of some of the most widely used platforms to help you select the one most suitable for your project.

 

  1. Open Science Framework (OSF): Versatile and flexible. OSF is an open-source platform that provides a comprehensive solution for managing research projects. It allows researchers to register studies across multiple disciplines (psychology, neuroscience, biomedicine, social sciences, etc.) using different pre-registration formats such as OSF Standard Pre-registration and the Pre-registration Challenge Template. It also offers the option to keep a pre-registration private until a specified date and assigns a DOI to each registration, making it easily citable.
  2. AsPredicted: Simple and fast. AsPredicted is designed to make the pre-registration process as simple as possible. It provides a structured form with nine key questions that help define the study design without the need to write a long document. It does not require technical knowledge or prior experience with pre-registration, making it particularly suitable for experimental studies with relatively simple designs.
  3. PROSPERO: For systematic reviews and meta-analyses. PROSPERO accepts systematic reviews and meta-analyses across multiple disciplines, including health sciences, psychology, and social sciences. It requires detailed information on inclusion and exclusion criteria for studies, search strategies, methods for bias assessment, and data analysis approaches. Many high-impact journals require systematic reviews and meta-analyses to be pre-registered in PROSPERO.

 

Is it mandatory to make it public?

Not necessarily. You may choose to keep your pre-registration private until a specified date or make it public from the start. Some platforms also allow it to be shared only with reviewers or collaborators before full publication.

 

Registered Reports: Peer Review Before Data Collection

Registered Reports are an innovative publication format designed to improve transparency and reproducibility in research. Unlike the traditional publication process, in which studies are reviewed after data collection and analysis, Registered Reports undergo peer review before data collection begins.

Advantages of this format:

  • Reduced publication bias: Studies are not rejected simply because they produce negative or non-significant results.
  • Improved methodological quality: Reviewers can identify potential design flaws before data collection begins.
  • Greater reproducibility and transparency: The entire process is documented and registered from the outset.
  • Growing recognition: Many high-impact journals have adopted this format, including Nature Human Behaviour, Cortex, and PLOS Biology.

More information about this procedure can be found here.

 

Data Management Plan

Research data management is a key component of Open Science. Organizing, documenting, and sharing research data appropriately helps ensure their accessibility and reuse within the scientific community. Funding agencies increasingly promote open-access policies in order to return publicly funded research to society. To support this goal, the FAIR Principles provide a clear and widely accepted framework for improving the quality and long-term usability of scientific data.

 

FAIR Principles: Findable, Accessible, Interoperable, Reusable

On March 15, 2016, the journal Scientific Data (Nature) published the article “The FAIR Guiding Principles for scientific data management and stewardship.” These principles establish clear and measurable criteria to ensure that data are:

  • Findable
  • Accessible
  • Interoperable
  • Reusable

Below are some general recommendations for aligning research data with these principles.

 

Findable and Accessible:

Data should be easy to locate for both people and computer systems, so they can be retrieved and used in the future. Recommended practices include:

 

  1. Assign unique and persistent identifiers (DOI, Handle, UUID).
  2. Use descriptive metadata that allow data to be effectively searched. Make metadata public even when the data themselves are restricted.
  3. Store data in reliable and well-indexed repositories to ensure long-term preservation.

Example: depositing data in repositories such as Zenodo, OSF, or OpenNeuro with an assigned DOI.
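To illustrate what descriptive metadata can look like, here is a short sketch using only the Python standard library. The field names loosely follow common repository deposit forms and are not any repository's official schema; the DOI and ORCID values are placeholders:

```python
import json

# Descriptive metadata record for a hypothetical dataset deposit.
# Field names are illustrative; real repositories (Zenodo, OSF,
# OpenNeuro) each define their own upload forms and schemas.
metadata = {
    "title": "Reaction-time dataset for the example study",
    "creators": [{"name": "Doe, Jane", "orcid": "0000-0000-0000-0000"}],  # placeholder
    "identifier": "10.5281/zenodo.0000000",  # placeholder DOI
    "keywords": ["open science", "reaction time", "psychology"],
    "description": "Trial-level response times collected for the example study.",
    "access_right": "open",
}

# Serialize the record so both humans and machines can search it.
record = json.dumps(metadata, indent=2, ensure_ascii=False)
print(record)
```

Publishing a record like this alongside the data (or in place of restricted data) is what makes a dataset findable even when the data themselves cannot be fully opened.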

 

Interoperable and Reusable:

Data should be compatible with other datasets and tools. To support future reuse, they must also be well documented and appropriately licensed. Recommended practices include:

 

  1. Use widely recognized formats that can be processed with free software (CSV, JSON, etc.). Avoid proprietary formats when possible.
  2. Structure data consistently and follow field-specific standards (e.g., BIDS for neuroimaging).
  3. Include clear, complete documentation explaining how the data were collected and how they should be interpreted.
  4. Provide detailed metadata describing the study context.
  5. Attach an appropriate reuse license (e.g., CC-BY, CC0).
  6. Maintain version control for datasets, particularly when major changes are made.
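As a small illustration of several of these practices at once, the sketch below writes tabular data in an open format (CSV) together with a JSON "sidecar" documenting each column, the license, and the dataset version. The BIDS-like file names are hypothetical:

```python
import csv
import json
import tempfile
from pathlib import Path

out_dir = Path(tempfile.mkdtemp())

# Tabular data stored in an open, non-proprietary format (CSV).
rows = [
    {"participant": "sub-01", "condition": "A", "rt_ms": 512},
    {"participant": "sub-01", "condition": "B", "rt_ms": 498},
]
data_file = out_dir / "task-example_events.csv"  # hypothetical, BIDS-like name
with data_file.open("w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["participant", "condition", "rt_ms"])
    writer.writeheader()
    writer.writerows(rows)

# JSON sidecar documenting the columns, license, and version,
# so the data can be interpreted and reused without guesswork.
sidecar = {
    "participant": {"Description": "Participant identifier"},
    "condition": {"Description": "Experimental condition", "Levels": ["A", "B"]},
    "rt_ms": {"Description": "Response time", "Units": "milliseconds"},
    "License": "CC-BY-4.0",
    "DatasetVersion": "1.0.0",
}
(out_dir / "task-example_events.json").write_text(json.dumps(sidecar, indent=2))
```

Field-specific standards such as BIDS prescribe exactly this kind of pairing of data files with sidecar metadata, which is why following them makes datasets interoperable by default.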

 

In summary, researchers should define a clear data structure, use open formats, and choose an appropriate repository. During data collection and analysis, each step should be documented, and widely accepted standards should be followed. When publishing datasets, it is essential to include detailed metadata, persistent identifiers, and an open license. Following these principles improves the organization, accessibility, impact, and reproducibility of research data.

To conclude, we emphasize the central motto underlying the FAIR principles: “As open as possible, as closed as necessary.”

 

 

Data Analysis

Data analysis is a crucial stage in any scientific study. To ensure reproducibility and transparency in research, it is important to adopt good practices that allow results to be understood, verified, and replicated. Below are some useful recommendations.

 

Transparency in Analyses

  • Document each step of your data processing and statistical analysis in detail.
  • Include information about the tools and software versions used.
  • Clearly explain methodological decisions that may affect the interpretation of results, such as parameter choices, data filtering procedures, or statistical methods applied.
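One simple way to record the tools and versions used is to log the interpreter and key package versions at the top of every analysis script. A minimal Python sketch (the package names are examples; list whichever libraries your analysis actually uses):

```python
import platform
import sys
from importlib import metadata

# Record the interpreter and key package versions so the analysis
# environment can be reconstructed later.
environment = {
    "python": sys.version.split()[0],
    "platform": platform.platform(),
}
for package in ["numpy", "pandas"]:  # example dependencies
    try:
        environment[package] = metadata.version(package)
    except metadata.PackageNotFoundError:
        environment[package] = "not installed"

print(environment)
```

Saving this dictionary alongside the results (or exporting a full `requirements.txt`) documents the exact environment in which the analysis ran.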

 

Use of Reproducible Notebooks and Scripts

Reproducibility is a cornerstone of Open Science. Whenever possible, use tools that allow code, results, and explanations to be combined within the same document. This facilitates structured workflows, transparent sharing of code and results, and easier review and replication. Some commonly used tools include:

 

  1. Jupyter Notebook (Python): Jupyter Notebook is an interactive web application that allows users to create and share documents containing code, explanatory text, visualizations, and equations in a single environment. It is widely used in scientific computing and data analysis because it allows code to be executed in independent cells, making it easy to test and modify code fragments. It supports multiple programming languages but is most commonly used with Python. It also allows integration of plots, comments, and formulas using Markdown or LaTeX, making it ideal for documenting and reproducing scientific analyses.
  2. R Markdown (R): R Markdown is a document format that combines text, R code, and visualizations in a single file, enabling the creation of reproducible reports. It uses Markdown syntax for text formatting and allows code chunks to be executed directly within the document, automatically generating results. It is particularly useful for data analysis because it allows integrating graphs, tables, and equations into reports that can be exported in formats such as HTML, PDF, or Word.
  3. Google Colab: A free cloud-based platform, similar to Jupyter Notebook, that lets users run notebooks without local installation or configuration. It is especially useful for research and data analysis, enabling the execution of Python code, the generation of visualizations, and the handling of large datasets. It also supports real-time collaboration and integrates easily with Google Drive for project storage and sharing. Access to advanced computational resources makes it a flexible platform for developing and documenting reproducible analyses.
  4. Important: If you use other types of scripts (for example, in MATLAB), ensure they are well documented and clearly organized.

 

Share and Version Your Code

Making analysis code available allows others to verify, reuse, and build on your work. This strengthens confidence in scientific findings, supports replication, and promotes collaboration.
Researchers are encouraged to use version control platforms and open repositories for hosting and maintaining code. Two widely used options are:

 

  1. GitHub is a cloud-based platform for version control and collaboration in software development and research projects. It allows users to store, manage, and share code efficiently, facilitating teamwork through tools such as Git. It is widely used in research to organize data analysis pipelines, document workflows, and ensure that projects are reproducible and accessible. GitHub also offers features such as change tracking, issue management for task coordination, and hosting websites or documentation with GitHub Pages.

  2. Zenodo is an open-access platform that allows researchers to store, share, and preserve datasets, publications, and source code free of charge. It is particularly valuable to the scientific community because it enables the publication of research software with a DOI, ensuring citability and academic recognition. It also promotes scientific reproducibility by providing a stable environment for sharing code, documentation, and associated datasets.

 

Good Practices When Sharing Code:

  • Include a README file describing the project, usage instructions, and required dependencies. For example, if your code depends on external software such as EEGLAB, FieldTrip, or other analysis tools, this should be clearly specified.
  • Use open-source licenses. Simply uploading code to a repository is not sufficient; you should select a license appropriate for your project.
  • Maintain version control of your code.
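A minimal README covering the points above might look like the following sketch; the project name, script names, dependency versions, and license are placeholders to adapt to your own project:

```
# Example Analysis Pipeline

Analysis code for [study name]. The scripts reproduce all figures and tables
in the manuscript.

## Requirements
- Python 3.10+ (or MATLAB with EEGLAB, if applicable)
- Python dependencies listed in requirements.txt

## Usage
Run the scripts in numbered order, e.g. 01_preprocess.py, then 02_analyze.py.

## License
Released under the MIT License (see the LICENSE file).
```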

 

 

Publication of Results

 

Open Repositories: Preprints

Several public, free repositories, such as PsychArchives and bioRxiv.org, allow researchers to upload manuscript versions as preprints. These platforms help ensure that research outputs remain publicly available even if the final journal article is published behind a paywall. Preprints are versions of a manuscript shared before formal journal publication and are generally not considered duplicate publications by most publishers. However, it is always advisable to check the editorial policy of the journal to which you plan to submit your work.

To help keep your publications openly accessible, we recommend uploading the original manuscript to a public repository before submitting it to a journal. Each time you revise the manuscript during peer review, you should also upload the updated version to the repository when you submit the revision to the journal. It is best not to wait until final acceptance, since some journals no longer allow repository updates at that stage. Following this procedure ensures that the accepted version of your article remains openly accessible, allowing anyone to access the results of your research.

 

UNDER DEVELOPMENT

Check back soon for new updates.