Guidelines and Best Practices

 

Throughout the research process, there are many opportunities to apply Open Science principles. In this section, we provide a structured guide highlighting the most relevant aspects at each stage of the research process, helping you implement them in a simple and effective way.

 

Pre-registration: Plan and Protect Your Research from the Start

Pre-registration is the process of documenting your research plan before collecting data. It involves registering in advance the study hypotheses, methodology, inclusion/exclusion criteria, and the statistical analyses that will be conducted.

Why pre-register your study?

  • Greater transparency and credibility: Helps prevent publication bias and the post hoc adjustment of hypotheses and analyses.
  • Improved reproducibility: Allows other researchers to understand exactly how the study was designed and compare results with the predefined analysis plan.
  • Protection of ideas: If your pre-registration is published on a platform, it establishes a timestamped record of your research idea before others publish similar work.
  • Compliance with journal and funder requirements: An increasing number of journals and funding agencies value or require pre-registration as part of good scientific practice.

What should a pre-registration include?

  • Main hypotheses and, where applicable, exploratory hypotheses.
  • Experimental design (independent variables, dependent variables, controls, etc.).
  • Inclusion and exclusion criteria for participants and data.
  • Statistical analysis plan (which statistical tests will be used and under which conditions).
  • Number of participants and justification of sample size (power analysis when possible).
  • Details about stimuli and materials used.
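To make the sample-size item above concrete, here is a minimal sketch of an a priori power calculation for a two-sample, two-sided t-test, using only the Python standard library. It relies on the normal approximation, so a dedicated tool (e.g., G*Power) will return slightly larger exact values; the effect size, alpha, and power figures are illustrative, not recommendations.

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sample, two-sided t-test.

    Uses the normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)

# Medium effect (Cohen's d = 0.5), alpha = .05, power = .80
print(sample_size_per_group(0.5))  # → 63 per group
```

A justification like "63 participants per group, based on d = 0.5, alpha = .05, power = .80" is exactly the kind of statement a pre-registration should contain.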

Where can I pre-register my study?

Several platforms allow researchers to create a public, timestamped pre-registration, ensuring transparency in the research process. Below is a brief description of some of the most widely used platforms to help you select the most suitable one for your project.

 

  1. Open Science Framework (OSF): Versatile and flexible. OSF is an open-source platform that provides a comprehensive solution for managing research projects. It allows researchers to register studies across multiple disciplines (psychology, neuroscience, biomedicine, social sciences, etc.) using different pre-registration formats such as OSF Standard Pre-registration and the Pre-registration Challenge Template. It also offers the option to keep a pre-registration private until a specified date and assigns a DOI to each registration, making it easily citable.
  2. AsPredicted: Simplicity and speed. AsPredicted is designed to make the pre-registration process as simple as possible. It provides a structured form with nine key questions that help define the study design without the need to write a long document. It does not require technical knowledge or prior experience with pre-registration and is particularly well suited for experimental studies with simple and straightforward designs.
  3. PROSPERO: For systematic reviews and meta-analyses. PROSPERO is designed for systematic reviews and meta-analyses across multiple disciplines, including health sciences, psychology, and the social sciences. It requires detailed information about the inclusion and exclusion criteria for studies, search strategies, bias assessment methods, and data analysis approaches. Many high-impact journals require systematic reviews and meta-analyses to be pre-registered in PROSPERO.

 

Is it mandatory to share it?

Not necessarily. You can keep your pre-registration private until a specified date or make it public from the start. Some platforms also allow you to share it only with reviewers or collaborators before full publication.

 

Registered Reports: Peer Review Before Data Collection

The Registered Reports format is an innovative scientific publishing initiative designed to improve transparency and reproducibility in research. Unlike the traditional publication process—where studies are reviewed after data have been collected and analyzed—Registered Reports undergo peer review before data collection begins. This model represents an important shift in scientific publishing and is gaining popularity in disciplines such as neuroscience, psychology, and biomedicine.

Advantages of this publication format:

  • Reduces publication bias: Studies are not rejected for obtaining negative or non-significant results.
  • Improves methodological quality: Reviewers can identify potential design flaws before data collection begins.
  • Greater reproducibility and transparency: The entire process is documented and registered from the outset.
  • Growing recognition: Many high-impact journals have adopted this format, including Nature Human Behaviour, Cortex, and PLOS Biology.

More information about this procedure can be found at the following link:

 

Data Management Plan

Research data management is a key aspect of Open Science. By properly organizing, documenting, and sharing research data, we ensure their reuse and accessibility for the scientific community. Open Science promotes transparency, accessibility, and collaboration in research, allowing results to be shared and verified without unnecessary barriers. In addition, funding institutions increasingly encourage open access policies in order to return public investment to society. To support this objective, the FAIR Principles provide a clear and widely accepted framework for improving the quality and sustainability of scientific data.

 

FAIR Principles: Findable, Accessible, Interoperable, Reusable

On March 15, 2016, the journal Scientific Data (Nature) published the article “The FAIR Guiding Principles for scientific data management and stewardship.” These principles establish a set of clear and measurable criteria to ensure that data are:

  • Findable
  • Accessible
  • Interoperable
  • Reusable

Below are several general recommendations to help ensure that research data follow this philosophy.

 

Findable and Accessible:

Data should be easy to locate for both humans and computer systems, ensuring they can be retrieved and used in the future. Recommended practices include:

 

  1. Assign unique and persistent identifiers (DOI, Handle, UUID).
  2. Use descriptive metadata that allow data to be effectively searched. Make metadata public even when the data themselves are restricted.
  3. Store data in reliable and well-indexed repositories to ensure long-term persistence.

Example: depositing data in repositories such as Zenodo, OSF, or OpenNeuro with an assigned DOI.
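To make the metadata recommendation concrete, here is a sketch of a machine-readable metadata record saved as a JSON file. The field names are illustrative, loosely modeled on common repository metadata; the ORCID and DOI values are placeholders, and you should always follow the exact schema of the repository you deposit in.

```python
import json

# Illustrative dataset-level metadata; field names are an assumption,
# not a specific repository's required schema. ORCID and DOI are placeholders.
metadata = {
    "title": "Example EEG dataset",
    "creators": [{"name": "Doe, Jane", "orcid": "0000-0000-0000-0000"}],
    "description": "Resting-state EEG recordings from 30 participants.",
    "keywords": ["EEG", "resting state", "open data"],
    "license": "CC-BY-4.0",
    "identifier": "10.5281/zenodo.0000000",  # placeholder DOI
}

# Save the record alongside the data so both humans and machines can find it
with open("dataset_description.json", "w", encoding="utf-8") as f:
    json.dump(metadata, f, indent=2)
```

A record like this can stay public even when the underlying data are restricted, which is precisely what the second recommendation above asks for.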

 

Interoperable and Reusable:

Data should be compatible with other datasets and tools. For future reuse, they must also be well documented and properly licensed. Recommended practices include:

 

  1. Use widely recognized formats that can be processed with free software (CSV, JSON, etc.). Avoid proprietary formats when possible.
  2. Structure data consistently and follow field-specific standards (e.g., BIDS for neuroimaging).
  3. Include clear and complete documentation, explaining both how the data were collected and how they should be interpreted.
  4. Provide detailed metadata describing the study context.
  5. Attach an appropriate license specifying reuse conditions (e.g., CC-BY, CC0).
  6. Maintain version control for datasets, particularly when substantial modifications are made.
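As a minimal sketch of practices 1, 3, and 4 above: tabular data saved as plain CSV, paired with a JSON sidecar documenting each column and its units. The sidecar-naming convention (same base name, `.json` extension) mirrors the BIDS style; the column names and values are invented for the example.

```python
import csv
import json

# Invented example data; in practice these come from your experiment
rows = [
    {"participant_id": "sub-01", "age": 24, "rt_ms": 512.3},
    {"participant_id": "sub-02", "age": 31, "rt_ms": 498.7},
]

# Practice 1: open, software-independent format (plain CSV)
with open("reaction_times.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["participant_id", "age", "rt_ms"])
    writer.writeheader()
    writer.writerows(rows)

# Practices 3-4: a JSON sidecar documenting each column and its units
sidecar = {
    "participant_id": {"Description": "Anonymized participant code"},
    "age": {"Description": "Age at testing", "Units": "years"},
    "rt_ms": {"Description": "Mean reaction time", "Units": "milliseconds"},
}
with open("reaction_times.json", "w", encoding="utf-8") as f:
    json.dump(sidecar, f, indent=2)
```

Anyone receiving these two files can open them with free software and immediately know what each column means and in which units it is expressed.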

 

In summary, researchers should define a clear data structure, use open formats, and choose an appropriate repository. During data collection and analysis, each step should be documented and widely accepted community standards should be used. Finally, when publishing datasets, it is essential to include detailed metadata, persistent identifiers, and an open license. By following these principles, data will not only be better organized and accessible, but will also increase their impact and reproducibility within the scientific community.

To conclude, we emphasize the central motto underlying the FAIR principles: “As open as possible, as closed as necessary.”

 

 

Data Analysis

Data analysis is an essential stage in any scientific study. To ensure reproducibility and transparency in research, it is important to adopt good practices that allow results to be understood, verified, and replicated. Below are some useful recommendations.

 

Transparency in Analyses

  • Document every step of your data processing and statistical analysis in detail.
  • Include information about the tools and software versions used.
  • Clearly explain methodological decisions that may influence the interpretation of results, such as parameter choices, data filtering procedures, or statistical methods applied.
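One lightweight way to implement the second point is to record the exact interpreter and package versions next to your results. A standard-library sketch (the function name and the package list are illustrative; the demo uses stdlib modules, which report no version number):

```python
import importlib
import json
import platform
import sys

def record_environment(packages):
    """Collect interpreter and package versions for a provenance log."""
    env = {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "packages": {},
    }
    for name in packages:
        try:
            module = importlib.import_module(name)
            # Most scientific packages expose __version__; stdlib modules do not
            env["packages"][name] = getattr(module, "__version__", "unknown")
        except ImportError:
            env["packages"][name] = "not installed"
    return env

# Example: log versions next to your analysis outputs
env = record_environment(["json", "csv"])  # stdlib modules, for the demo
print(json.dumps(env, indent=2))
```

Saving this dictionary as a small JSON file with every analysis run means that, years later, the exact software environment can still be reconstructed.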

 

Use of Reproducible Notebooks and Scripts

Reproducibility is a cornerstone of Open Science. Whenever possible, use tools that allow code, results, and explanations to be combined within the same document. This facilitates structured analysis workflows, enables transparent sharing of code and results, and simplifies both review and replication of the work. Some commonly used tools include:

 

  1. Jupyter Notebook (Python): Jupyter Notebook is an interactive web application that allows users to create and share documents containing code, explanatory text, visualizations, and equations in a single environment. It is widely used in scientific computing and data analysis because it allows code to be executed in independent cells, making it easy to test and modify code fragments. It supports multiple programming languages but is most commonly used with Python. It also allows integration of plots, comments, and formulas using Markdown or LaTeX, making it ideal for documenting and reproducing scientific analyses.
  2. R Markdown (R): R Markdown is a document format that combines text, R code, and visualizations in a single file, enabling the creation of reproducible reports. It uses Markdown syntax for text formatting and allows code chunks to be executed directly within the document, automatically generating results. It is particularly useful for data analysis because it allows the integration of graphs, tables, and equations into reports that can be exported to formats such as HTML, PDF, or Word.
  3. Google Colab: Similar to Jupyter Notebooks, Google Colab is a free cloud-based platform that allows users to run Jupyter notebooks without local installation or configuration. It is especially useful for research and data analysis, enabling Python code execution, visualization generation, and the handling of large datasets. It also supports real-time collaboration and integrates easily with Google Drive for project storage and sharing. Access to advanced computational resources makes it a flexible platform for developing and documenting reproducible analyses.
  4. Important: If you use other types of scripts (for example in MATLAB), ensure they are well documented and clearly organized.
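Whichever tool you choose, one habit that keeps a notebook or script reproducible is fixing random seeds, so that stochastic steps (sampling, shuffling, simulation) produce identical results on every run. A minimal standard-library sketch (the simulated reaction-time values are invented for the example):

```python
import random

SEED = 20240101  # fixed seed, recorded in the script itself

def simulate_condition(n, seed=SEED):
    """Draw n simulated reaction times; deterministic given the seed."""
    rng = random.Random(seed)  # local generator, avoids global state
    return [rng.gauss(mu=500, sigma=50) for _ in range(n)]

run1 = simulate_condition(10)
run2 = simulate_condition(10)
assert run1 == run2  # identical on every execution with the same seed
```

Using a local `random.Random` instance rather than the module-level functions keeps the seeding explicit and prevents other code in the notebook from silently changing the random state.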

 

Share and Version Your Code

Adopting good practices in data analysis not only improves the quality and credibility of research but also allows others to benefit from the knowledge generated. Open Science begins with transparency and accessibility.

Source code for analyses should be available so that others can verify and reuse your work. This increases confidence in scientific findings, facilitates replication of analyses and studies, and promotes collaboration and the joint development of tools for the scientific community. To achieve this, researchers are encouraged to use version-control platforms and open repositories where code can be hosted and maintained. Two widely used platforms include:

 

  1. GitHub is a cloud-based platform for version control and collaboration in software development and research projects. It allows users to store, manage, and share code efficiently, facilitating teamwork through tools such as Git. It is widely used in research to organize data analysis pipelines, document workflows, and ensure that projects are reproducible and accessible. GitHub also offers features such as change tracking, issue management for task coordination, and the ability to host websites or documentation using GitHub Pages.

  2. Zenodo is an open-access platform that allows researchers to store, share, and preserve datasets, publications, and source code free of charge. It is particularly valuable for the scientific community because it enables the publication of research software with DOI assignment, ensuring citability and academic recognition. It also promotes scientific reproducibility by providing a stable environment for sharing code alongside documentation and associated datasets.

 

Good Practices When Sharing Code:

  • Include a README file describing the project, usage instructions, and required dependencies. For example, if your code depends on external software such as EEGLAB, FieldTrip, or other analysis tools, this should be clearly specified.

  • Use open-source licenses. Simply uploading code to a repository is not sufficient; you should select a license appropriate for your project.

  • Maintain version control of your code.
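The checklist above can even be automated. A small sketch (a hypothetical helper, standard library only) that warns when a project folder is missing the files mentioned; the file names checked are the common conventions, so adapt the list to your project:

```python
from pathlib import Path

def check_repository(path):
    """Return the recommended files missing from a project folder.

    README.md and LICENSE are the conventional names; extend the
    list with anything else your project should ship with.
    """
    expected = ["README.md", "LICENSE"]
    root = Path(path)
    return [name for name in expected if not (root / name).exists()]

# Example usage on the current directory
missing = check_repository(".")
if missing:
    print("Missing recommended files:", ", ".join(missing))
else:
    print("README and LICENSE found - ready to share.")
```

Running a check like this before publishing a repository catches the most common omission: code uploaded with no license, which legally prevents reuse.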

 

 

Publication of Results

 

Open Repositories: Preprints

There are several public and free repositories, such as PsychArchives or bioRxiv.org, that allow researchers to upload manuscript versions as preprints. These platforms ensure that research outputs remain publicly available even if the final journal publication is behind a paywall. Preprints are versions of a manuscript prior to formal journal publication and are generally not considered duplicate publications by most publishers. Major publishing houses typically allow the use of preprint servers. Nevertheless, we recommend carefully reviewing the editorial policy of the journal to which you plan to submit your article.

To ensure that your publications remain openly accessible, we recommend uploading the original manuscript to a public repository before submitting it to any journal. Similarly, each time you revise your manuscript during the peer-review process, you should upload the updated version to the repository at the same time that the revised manuscript is submitted to the journal. You should not wait until final acceptance to do this, as some journals no longer allow repository updates at that stage. Following this procedure ensures that the accepted version of your article remains openly accessible, allowing anyone to access the results of your research.

 

UNDER DEVELOPMENT

Check back soon for new updates.