Handbook of Operating Procedures

Governance and Use of Artificial Intelligence

Policy Number: 235
Subject: Artificial Intelligence
Scope: All University faculty, staff, trainees, contractors, and business units
Date Reviewed: November 2025
Responsible Office: Information Technology
Responsible Executive: Vice President and Chief Information Officer

I. POLICY AND GENERAL STATEMENT

The University of Texas Health Science Center at Houston (“University”) strives for safe, lawful, and ethical use of artificial intelligence (“AI”) in clinical care, education, research, and administration, including compliance with applicable state and federal laws and regulations.

To meet these aspirations, the University employs several strategies across its uses of AI:

  • Inventory: All AI Systems must be registered in a central inventory.
  • Continuous Oversight: All AI Systems (e.g., cybersecurity, spam filtering, and analytics tools) must follow inventory and review processes.
  • Risk Assessment and Monitoring of Heightened Scrutiny AI Systems: Any system intended either to make a consequential decision on its own or to be a controlling factor in that decision (i.e., the main reason for the outcome or able to change it) requires impact assessments, transparency, monitoring, and documentation.
  • Ethics Standards: The University, the University community, and University partners must comply with the Artificial Intelligence Code of Ethics developed by the Texas Department of Information Resources.

This policy provides guidance to the University community concerning the development, implementation, and use of AI Systems in a variety of contexts, whether acquired or developed in-house. While this policy offers some guidance for particular domains, additional specific policies concerning particular types of AI Systems, or the use of AI Systems in certain contexts, may also apply. Policies of general application also govern the use of AI Systems even if they do not expressly reference AI Systems.

All users are responsible for ensuring the safe, lawful, and ethical use of AI Systems, including ensuring that the particular AI System is appropriate for use in its context. This means that users should not rely on versions of AI Systems that do not adequately protect the sensitive data they process (as is often the case with free-to-use platforms). While AI Systems may serve as valuable aids, they are not replacements for human-centric decision-making, creativity, and judgment. All users are responsible for critically assessing AI-generated outputs to evaluate accuracy, quality, fairness, and alignment with the University’s mission.

II. DEFINITIONS

AI System: A machine-based system that, for explicit or implicit objectives, infers from provided information how to generate outputs such as predictions, content, recommendations, or decisions that influence a physical or virtual environment, with varying levels of autonomy and adaptiveness after deployment. Examples of AI Systems include patient no-show predictors, medical imaging classifiers, applicant resume rankers, writing and summarization assistants, quiz generators, spend anomaly detectors, malicious email filters, and cohort discovery assistants.

Heightened Scrutiny AI System(s) (HSAIS): An AI System that is intended either to make a consequential decision on its own or to be a controlling factor in that decision (i.e., the main reason for the outcome or able to change it). The term does not include AI used only for narrow procedural tasks, to improve finished human work, to prepare inputs for a decision, or to detect decision-making patterns.

Impact Assessment: A pre-deployment review of risk, fairness, data use, limitations, security, and mitigation strategies.

Consequential Decision: A decision with material impact on the academic, clinical, administrative, or legal status of a person.

AI-Related Incident: An occurrence in which an AI System or its use may violate University policy; result in harm, excessive technical bias, or unfair outcomes; bypass established review; compromise sensitive data; operate beyond its intended or authorized use; generate outputs that are by design unsafe, untraceable, or misleading; or create operational risk.

System Owner: The person responsible for the business function or project that depends on a system. For more information, see HOOP 175, Roles and Responsibilities for University Information Resources and University Data.

III. PROCEDURE

All users are responsible for ensuring the safe, lawful, and ethical use of AI Systems. Some areas have specific responsibilities: 

  • System Owner: Identify, classify, and report AI Systems to Information Security.
  • Information Security: Maintain the central AI inventory, classify HSAIS, and conduct technical oversight.
  • AI Governance Committee: Review HSAIS impact assessments, approve AI Systems, and provide guidance.
  • AI Categorization: In addition to the basic inventory and oversight framework contained within this policy, System Owners must review [AI Categorization ITPOL link] for additional requirements that may apply to their respective systems based on Information Security’s categorization matrix.

     A. Inventory

The Chief Information Security Officer or designee maintains the central AI System inventory.

     B.  Continuous Oversight of AI Systems and HSAIS

Information Security will review each AI System submitted to the AI System inventory to determine HSAIS classification. While all AI Systems require standard oversight, HSAIS require added oversight, including risk assessment and monitoring. Whenever it is unclear whether a particular system meets the definition of HSAIS, users should treat the system as an HSAIS until Information Security issues a determination otherwise.

          1. Standard Oversight

All AI Systems are entered into the AI System inventory after submission by the System Owner and are subject to Information Security review. Impact assessments are not required for non-HSAIS.

In cases where an AI System is public-facing or is a controlling factor in a consequential decision, the System Owner, in coordination with Information Security, will post a notice on all related applications, Internet websites, and public computer systems, consistent with notice requirements set by the Texas Department of Information Resources.

          2. HSAIS Requirements

Minimum standards for HSAIS are defined by the Texas Department of Information Resources. The University’s HSAIS oversight program includes:

  • Impact Assessment: Prior to deployment, Information Security will coordinate and track impact assessments of each HSAIS, including assessment and documentation of the system’s known security risks, performance metrics, and transparency measures. Assessments will be repeated whenever a material change is made to the system, to the data it uses, or to its intended use.
  • Monitoring: The System Owner will coordinate and track ongoing performance and fairness reviews consistent with the Artificial Intelligence Code of Ethics established by the Texas Department of Information Resources.
  • Documentation: Information Security will retain records of assessments.

     C.  Domain-Specific Guidance

Development, implementation, and use of AI Systems in specific domains can require specific considerations. This section sets out basic parameters for certain contexts. The guidance in this section is not exhaustive; other policies concerning particular types of AI Systems, or the use of AI Systems in certain contexts, may also apply.

          1. Clinical Care

AI supports but does not replace clinician judgment. An AI System that impacts patient care without meaningful human review is considered an HSAIS and is subject to the corresponding added oversight.

          2. Research

Use of AI in research, writing, analysis, or publication must abide by applicable standards including those imposed by publishers, sponsoring entities, or other agencies. AI in human subjects research must follow IRB and data governance protocols.

          3. Teaching & Learning

By default, AI may be used to support teaching and learning.

  • Instructors should clarify AI expectations in the syllabus and communicate course-level guidance.
  • When AI use is allowed, students must cite AI-generated content, for example by appending the prompts used or by indicating AI-assisted sections. For more information, see HOOP 186, Student Conduct and Discipline.
  • Faculty using AI in instructional materials must ensure FERPA/HIPAA compliance (as applicable) and oversee accuracy and clarity.

          4. Content Creation & Brand Standards

Questions regarding intellectual property considerations for published material containing AI-generated content such as images should be directed to the Office of Legal Affairs. Questions regarding brand standards and conventions for published material containing AI-generated content should be directed to the Office of Public Affairs.

          5. Administrative & Business Uses

AI used in administrative and business functions, such as admissions, applicant management, hiring, financial aid, and benefits, must strive for fairness and transparency and must be tested by the developer for known biases. AI Systems that autonomously affect such decisions meet the HSAIS criteria and require additional oversight.

          6. Contracted AI Vendors

Vendors must disclose when the products they provide use AI so that those products can be properly included in the AI inventory. For vendor-provided AI (especially HSAIS), contracts must require the vendor to perform impact assessments aligned with applicable state and federal laws and regulations, allow University access to documentation, comply with data privacy provisions (e.g., HIPAA, FERPA), and allow contract termination for non-compliance. Vendors must also provide appropriate certifications concerning fairness-aware and bias-aware audit practices.

     D.  Enforcement

Non-compliant AI Systems may be disabled, removed, or deactivated at the direction of the Office of the CIO or designee.

Violations of this policy or other applicable policies may lead to disciplinary action up to and including termination of employees, discipline or dismissal of trainees, cancellation of contractor or vendor contracts, and pursuit of criminal action where applicable.

Employees may anonymously report suspected violations to the Compliance Hotline.

     E.  References

HOOP 175, Roles and Responsibilities for University Information Resources and University Data
HOOP 186, Student Conduct and Discipline

IV. CONTACTS

    • Information Technology: 713-486-2220