Laboratory Policy on the Use of Generative AI

Guidelines for using generative AI tools responsibly in undergraduate, master’s, and doctoral research.

Basic Policy

Our laboratory actively encourages the use of generative AI (e.g., ChatGPT, Claude, Gemini, Copilot) as an auxiliary tool in research activities.

Generative AI is a useful instrument for improving research efficiency, but it is not a substitute for a researcher's own thinking and judgment; the researcher always bears primary responsibility for their work.

Permitted Uses

Generative AI may be used for the following auxiliary purposes:

  • Programming assistance (syntax checking, template generation, debugging support, etc.)
  • Improving written expression and assisting with English proofreading
  • Assisting with literature search and summarization
  • Assisting with figure/table creation and data organization
  • Interactive use for organizing ideas

Inappropriate Uses

The following practices are not accepted as legitimate research:

  • Submitting code, theory, or text generated by AI without understanding it
  • Delegating experimental design or discussion entirely to generative AI
  • Quoting or repurposing AI-generated content without verifying its sources

Conditions for Recognizing Research Outputs (Most Important)

Even when generative AI is used, the following conditions must be met:

  • The researcher must be able to explain the key aspects of the research (methodology, theory, evaluation metrics) in their own words
  • The researcher must be able to modify and extend the important parts of the code themselves
  • The researcher must be able to reproduce the experimental results and explain their validity

Any output that cannot be explained, modified, or reproduced will not be recognized as a research contribution.

Relationship with Open Science Policy

Our laboratory encourages the publication of research outputs (open-sourcing, arXiv submissions, etc.). As a general rule, only publicly shareable information should be used as input to generative AI. The following categories are not publicly shareable and must not be entered:

  • Information subject to joint research agreements (NDAs, etc.)
  • Personal information or non-anonymized data
  • Manuscripts or comments under peer review
  • Confidential materials provided by external organizations

Before entering pre-publication materials into external generative AI services, consult your supervisor as needed.

Transparency of Use

When generative AI has been used, briefly record the following in progress reports and similar documents (an example follows the list):

  • Name of the tool used
  • Purpose of use
  • Scope of adoption (which parts of the work incorporate AI output)
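
For example, a record in a progress report might read as follows. This is an illustrative entry only; the tool and task named here are hypothetical:

  • Tool used: Claude
  • Purpose of use: debugging support for a data-preprocessing script
  • Scope of adoption: suggested fix adopted after manual review and testing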

When disclosure of generative AI use is required by submission guidelines or ethical standards of journals, conferences, or other venues, comply with those requirements.

Responsibility

Regardless of whether generative AI was used, the author bears responsibility for the content of their research outputs.

Students must be able to account for the accuracy, reproducibility, and proper attribution of their own work.

Supervisors, in their role as research advisors and overseers, likewise bear responsibility for verifying the validity, reproducibility, and ethical integrity of the research, and for providing appropriate guidance.

This policy is established to maintain research quality and integrity while enabling the sound and effective use of generative AI.