AI-Generated Content



Appropriate Use Cases

Every appropriate use case requires critical review of the output. For example, it is acceptable to use AI to make a chart summarizing data, but you are responsible for verifying that the chart is correct and accountable if it is not.

All AI-generated content must be reviewed by a human before publication. 

Do 

  • Use AI for drafting, brainstorming and editing. 
  • Use UT-approved tools to ensure data security. 
  • Maintain a prompt library for consistency. 
  • Use AI to enhance accessibility and efficiency. 
  • Use AI to help you summarize and synthesize content for internal use. 
  • Use AI to improve workflow and productivity. 
  • Comply with all UT policies and applicable laws. 
  • Vet all work for consistency, accuracy and alignment with the brand.  
  • Engage in constant upskilling and development to ensure literacy and fluency with AI. 
  • Practice responsible adoption principles endorsed by the University. 
  • Protect the UT brand against AI slop — low quality content with no oversight and no genuine value to audiences. 

Don’t

  • Don’t publish final content without careful human review. 
  • Don’t input confidential or proprietary information (e.g., student records, internal strategy) into free or non-UT-managed AI tools (e.g., ChatGPT, Claude), or into any tool without proper authorization. UT-approved tools, such as UT-managed Copilot, may be used with confidential or proprietary information. 
  • Don’t fabricate quotes, testimonials or personal stories. 
  • Don’t replace subject matter expertise with AI. 
  • Don’t create misleading visuals or alter real subject matter (people, events, settings, etc.).  
  • Don’t use AI to generate or modify UT trademarks or dilute the UT brand. 
  • Don’t violate copyright or licensing rules. 
  • Don’t create a representation (digital twin) of UT leadership or spokespersons.  
  • Don’t use generative AI for proprietary research, legal advice, hiring decisions, grading or academic work unless explicitly permitted. 
  • Don’t use AI for translation without full review by a native speaker for accuracy, tone and clarity.

AI-generated visuals may support early-stage creative exploration. For example, use AI to generate sample images for wireframes, early-stage drafts or mockups. Final assets, however, should be brand-approved images, photography or purchased artwork. 

If AI-generated visuals are published, they must meet institutional standards and include final review and approval by a human.  

Do 

  • Use AI for cosmetic touch-ups (e.g., stray hairs, shine reduction, object/blemish removal).
  • Use AI to prototype possible images, infographics or other media to use as a basis for your own creativity.
  • Use AI for background simplification (e.g., remove/adjust shadows or unwanted objects such as trash and construction debris).
  • Use AI for cropping, resizing, color and lighting adjustment.
  • Indicate when a published image is AI-generated rather than photographed or hand-created.
    • Example: “Image created with [AI tool name]” 

Don’t

  • Don’t use AI-generated images to manipulate, mislead or misinform.
  • Don’t generate images that appear photorealistic yet depict events or scenes that did not occur. This includes generating images of students, faculty or staff who do not “exist” on our campus.
  • Don’t create, modify or imitate University wordmarks, logos or any trademarked property.
  • Don’t depict future or speculative campus architecture, structures, art, monuments or grounds as if they currently exist.
  • Don’t modify existing campus structures to include non-existent features (e.g., adding floors or statues, changing signage or facades).
  • Don’t create AI slop — low quality images, videos or other content with no oversight and no genuine value to audiences.

While AI offers powerful capabilities, it also comes with risks and limitations:

  • Privacy & Security – Protect sensitive data; use only approved platforms.
  • Hallucinations – AI may generate inaccurate or fabricated content.
  • Misalignment – Outputs may not reflect UT’s values or goals.
  • Bias – AI can reflect or amplify societal biases.
  • Ethics – Responsible use is essential to maintain trust and credibility.
  • Cognitive Offloading – AI should support, not replace, critical thinking.
To keep these risks in check:

  • Use AI to accelerate, not replace, creative thinking.
  • Apply human judgment for tone, authenticity and brand fit.
  • Review outputs for bias and accessibility.
  • Keep a prompt library to improve workflow.