6.33 ARTIFICIAL INTELLIGENCE (AI) USAGE

Effective Date: August 20, 2025

Responsible Office: Information Technology

Scope: This policy applies to all members of the University community who engage with AI technologies in the context of university-related activities, including but not limited to academic instruction, research, administrative functions, and operational tasks.

Policy Statement: Emporia State University (ESU) strives for academic excellence and innovation through the responsible use of Artificial Intelligence (AI) technologies. This policy outlines the principles and guidelines for the ethical, secure, and effective use of AI tools by students, faculty, staff, and affiliates of the University, in accordance with the State of Kansas Generative Artificial Intelligence Policy (https://www.governor.ks.gov/home/showpublisheddocument/405/638744386434621722).

This policy governs the use of generative artificial intelligence technology, and related activities, by University entities. A separate policy may impose additional restrictions, but it must not conflict with the provisions of this policy.

This policy applies to all business use cases involving Emporia State University, including but not limited to:

  • Development of software code,
  • Written documentation (e.g., policies, legislation, or regulations) and correspondence (such as memoranda, letters, text messages, and emails),
  • Research,
  • Summarizing and proofreading documents,
  • Making business decisions that impact short-term or long-term activities or policies and procedures.

Responsibilities

A knowledgeable human operator should review generative AI outputs for accuracy, appropriateness, privacy, and security before using them.

Generative AI outputs shall not:

  • Be used verbatim,
  • Be assumed to be truthful, credible, or accurate,
  • Be treated as the sole source of reference,
  • Be used to issue official statements (e.g., policies, legislation, or regulations),
  • Be solely relied upon for making final decisions,
  • Be used to impersonate individuals or organizations.

Restricted-use information (RUI) must not be disclosed when interacting with AI tools. RUI is defined in Section 9.16 of ITEC Policy 7230A.

Guidelines

Ethical Use of AI:

  • AI tools should be used in a manner that upholds the University’s values of integrity, fairness, and respect.
  • Users must avoid employing AI in ways that could result in harm, discrimination, or bias.
  • Transparency in AI-generated content is essential; users should disclose the use of AI in their work where appropriate.

Academic Integrity:

  • Students must be aware of the University's Academic Dishonesty Policy when using AI in academic work.
  • Faculty members are encouraged to specify in their syllabi the acceptable use of AI tools for coursework.
  • Unauthorized use of AI in assessments or assignments may be considered academic misconduct.

Data Privacy and Security:

  • Users must not input confidential, sensitive, or personally identifiable information into AI tools without proper authorization.
  • AI tools used must comply with the University's data security standards and relevant legal regulations.
  • Departments should use only University-approved AI tools.

Procurement and Implementation:

  • Acquisition of AI tools for university use requires approval from the IT department to ensure compatibility with ESU's infrastructure and compliance with security protocols.
  • University IT must conduct a risk assessment and obtain necessary approvals before integrating AI solutions into University operations.

Training and Support:

  • The University will provide resources and training to educate the university community on the responsible use of AI technologies.
  • Users are encouraged to participate in training sessions and stay informed about best practices and emerging issues related to AI.

Reporting Violations

Any suspected misuse of AI technologies should be reported to the Chief Information Officer (CIO) or the Chief Information Security Officer (CISO). Reports will be handled in accordance with the University's established procedures for addressing policy violations.

Consequences for Violations

Violations of this policy may result in disciplinary action, including but not limited to revocation of access to AI tools, academic penalties, or employment sanctions, in accordance with the University's policies and procedures.

Definitions:

Artificial Intelligence (AI) - Technologies that enable machines to perform tasks that typically require human intelligence, such as learning, reasoning, problem-solving, and language understanding. AI systems employ machine learning, predictive algorithms, and large language models to process natural language and produce text, images, or other types of media.

AI Tools - Software applications or platforms that utilize AI technologies to perform specific functions or tasks.

Procedures: Procedures related to the implementation and enforcement of this policy will be developed by the AI taskforce, maintained by the IT department, and made available to the University community.

Related Policies: 4.71 - Academic Dishonesty Policy; 6.01 - Information Security Responsibilities and Definitions; 6.09 - Information Classification; ITEC Policy 7230A

History and Revisions

Adoption Date:
08/20/2025 [Signed by President and included in UPM as Policy 6.33]
Revision Date:
Unknown