PURPOSE
This policy governs the responsible use of generative AI and large language model (LLM) tools in order to protect the interests of the Saint Louis Art Museum (SLAM) from the risks associated with the technology.
SCOPE
The scope of the AI Security Policy is to establish a framework that ensures the secure development, deployment, and operation of AI systems. This includes identifying potential risks and vulnerabilities, enforcing ethical guidelines and compliance with global security standards, and developing strategies for threat detection, prevention, and response. The policy also addresses the need for continuous education and training in AI security best practices, as well as the importance of privacy and data protection. Ultimately, the policy aims to foster a culture of security and trust in AI technologies while promoting their responsible and beneficial use. This policy applies to all employees who currently use, or anticipate using, artificial intelligence as part of their workflow.
This policy applies to the use of public generative AI (Gen AI) systems (e.g., ChatGPT) and to any AI or machine learning (ML) models or systems the museum uses or develops internally.
DEFINITIONS
AI model: The algorithm used to interpret, assess, and respond to data sets based on the training it has received and/or any component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.
AI system: Any data system, software, hardware, application, tool, or utility that operates in whole or in part using AI and/or the infrastructure that uses the AI model to produce output based on interpretations and decisions made by the algorithm.
Public AI: An AI system that a vendor makes available to any user who wants access and that collects and uses their inputs to improve the algorithm’s performance. Unlike private AI systems, public systems send data outside the organization.
Private AI: A proprietary AI system developed and used by the organization, keeping data within the company.
Responsible AI: A set of guiding principles to promote ethical use of AI.
LLM: Large language model; a type of artificial intelligence model designed to understand, generate, and translate human language. LLMs are trained on vast amounts of text data and can perform a variety of language-related tasks, such as summarizing documents, answering questions, and creating content.
POLICY STATEMENTS
All organizational use of AI must meet the principles defined in the Responsible AI Framework:
- Privacy: Individual privacy must be respected.
- Fairness and Bias Detection: Unbiased data must be used to produce fair predictions.
- Explainability and Transparency: Decisions or predictions should be explainable.
- Safety and Security: The system needs to be secure, safe to use, robust, and comply with all museum policies.
- Validity and Reliability: Plans must be made to monitor the data and the model.
- Accountability: The user and/or the user's manager take responsibility for any decisions made based on the model.
Data Confidentiality
- All existing data confidentiality controls and best practices must be in place and observed when using AI as part of a business process.
- Data used to train the AI model shall be classified as restricted and must be encrypted while at rest to secure against data exfiltration by a bad actor.
- Privacy regulations and the organizational processes designed to comply with them must be followed when entering data into an AI system, especially in cases involving a public AI system (e.g., ChatGPT).
- All suspected or confirmed cases of compromised data confidentiality must be reported to Information Technology using the established channels as soon as possible.
- Data owners must give formal approval before a given data type can be used in an AI system.
- Knowledge and specific details about how the model has been trained and how it works must be kept strictly confidential, with access to such information being granted on a need-to-know basis.
Data Integrity
- Data must be verified to meet quality standards before being incorporated into organizational data repositories to avoid degrading data integrity with erroneous or otherwise low-quality inputs.
- AI-generated data must be labeled as such so it can be quickly located if associated data sets must be reviewed, corrected, adjusted, recalled, etc. (a minimal labeling sketch follows this list).
- AI system data must be audited regularly to ensure it has not been tampered with and continues to meet organizational data-integrity standards.
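As one illustration of the labeling requirement above, the following minimal Python sketch wraps stored content with a provenance label. The dataclass, field names, and source-system values are hypothetical, not a mandated schema; any real implementation should follow the museum's data standards.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class RecordProvenance:
    """Hypothetical provenance label attached to each stored record."""
    ai_generated: bool   # True if the content was produced by an AI system
    source_system: str   # e.g., "Microsoft Copilot" (illustrative value)
    created_utc: str     # ISO 8601 timestamp for audit trails

def label_record(content: str, ai_generated: bool, source_system: str) -> dict:
    """Wrap content with a provenance label so AI-generated data can be
    located quickly during reviews, corrections, or recalls."""
    provenance = RecordProvenance(
        ai_generated=ai_generated,
        source_system=source_system,
        created_utc=datetime.now(timezone.utc).isoformat(),
    )
    return {"content": content, "provenance": asdict(provenance)}

# Example: tag an AI-generated summary before storing it.
record = label_record("Quarterly visitor summary...", True, "Microsoft Copilot")
print(json.dumps(record, indent=2))
```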
Data Resiliency
- AI models and training data must be backed up at least weekly (a sketch of a backup-recency check follows this list).
- Recovery time objectives (RTOs) must be tested biannually.
- Recovery point objectives (RPOs) must be tested biannually.
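The following minimal sketch shows one way the weekly backup requirement could be verified automatically. The backup path and threshold are assumptions for illustration; the actual values come from the Data Backup Procedure.

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

# Hypothetical backup location and threshold; substitute the actual
# values from the Data Backup Procedure.
BACKUP_DIR = Path("/backups/ai-models")   # assumed repository path
MAX_BACKUP_AGE = timedelta(days=7)        # the "at least weekly" requirement

def newest_backup_age(backup_dir: Path) -> timedelta | None:
    """Return the age of the most recent file in the backup directory,
    or None if no backups are present."""
    newest = max(backup_dir.glob("*"), key=lambda p: p.stat().st_mtime,
                 default=None)
    if newest is None:
        return None
    modified = datetime.fromtimestamp(newest.stat().st_mtime, tz=timezone.utc)
    return datetime.now(timezone.utc) - modified

if __name__ == "__main__":
    age = newest_backup_age(BACKUP_DIR)
    if age is None or age > MAX_BACKUP_AGE:
        print("ALERT: weekly backup requirement not met; "
              "escalate per the Incident Response Plan.")
    else:
        print(f"OK: newest backup is {age} old.")
```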
IT Controls
- Regular user access monitoring will be in place for the AI system, model, and training data.
- Appropriate data access controls must be in place for the AI model and training data.
- Multifactor authentication must be used when signing into the AI system or accessing the AI model and training data.
- All sensitive data used in conjunction with the AI model must be protected with AES-256 encryption or better (a minimal encryption sketch follows this list).
- Encryption key management best practices must always be followed, including, but not limited to:
  - Use only approved key-generation methods.
  - Store keys only in designated repositories.
  - Use a secure connection when sending or receiving encryption keys.
  - Keep records of key sharing accurate and up to date.
  - Report lost or stolen key-enabled devices immediately.
  - Log and audit key management activities regularly.
  - Follow the established key-rotation schedule.
  - Delete keys after a potential compromise.
- An intrusion detection system must be in place for all AI models, systems, and training data repositories.
- AI-generated code must not be incorporated into any of SLAM’s systems without proper authorization.
- AI systems must be configured to reset if a maximum energy consumption threshold is reached.
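As a minimal sketch of the AES-256 requirement above, the following Python example uses the cryptography package's AES-256-GCM primitive, which also authenticates the data so tampering is detectable. The in-line key generation is illustrative only; a production key must come from an approved key repository and follow the key-management controls in this policy.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative only: a real key must come from an approved repository,
# never be generated ad hoc like this.
key = AESGCM.generate_key(bit_length=256)   # 256-bit key satisfies AES-256

def encrypt_at_rest(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt sensitive training data with AES-256-GCM (authenticated)."""
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)                  # unique 96-bit nonce per message
    return nonce + aesgcm.encrypt(nonce, plaintext, None)

def decrypt_at_rest(blob: bytes, key: bytes) -> bytes:
    """Reverse of encrypt_at_rest; raises if the data was tampered with."""
    aesgcm = AESGCM(key)
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, None)

sealed = encrypt_at_rest(b"restricted training record", key)
assert decrypt_at_rest(sealed, key) == b"restricted training record"
```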
Acceptable Use
- Employees must obtain approval from their manager, and any required license, prior to using AI. This includes defining the purpose, reviewing the data sets involved, and determining the expected outcome.
- The employee’s manager is responsible for notifying the Privacy Officer and monitoring appropriate use of AI in their area.
- Specific AI systems are only to be used by authorized personnel who have completed appropriate training to protect data confidentiality and integrity, and who use them only as part of approved business processes.
- Employees may use Microsoft Copilot for approved business processes such as research, data analysis, and communications, provided that organizational standards to protect data confidentiality and integrity, as laid out in this policy and elsewhere, are upheld.
- Employees are not permitted to enter unapproved data types into public AI systems, and the use of sensitive data is strictly prohibited (a sketch of a pre-submission screening check follows this list).
- Any exception to the use of sensitive data in public AI systems must be formally approved by the employee's manager, the data owner, and the Privacy Officer before any action can occur.
- Employee use of Gen AI systems must be lawful and not jeopardize the organization’s professional reputation or brand.
- The employee(s) and their manager will be accountable for any issues arising from their elective use of Gen AI as part of business processes, including, but not limited to: copyright violations, sensitive data exposure, poor data quality, and bias or discrimination in outputs.
- Prior to use of Gen AI, employees must complete training related to data protection, privacy, data quality, data integrity, and responsible AI use.
- This training will include but is not limited to courses assigned via Easy Llama or other acceptable methods.
- Employees must not violate any privacy or data protection regulations when using Gen AI systems.
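The following minimal sketch illustrates one way a prompt could be screened for obviously sensitive patterns before being sent to a public AI system. The patterns are illustrative, not exhaustive, and such a check supplements, never replaces, the approvals required above.

```python
import re

# Illustrative patterns only; a real screen would implement the museum's
# data classification rules.
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt
    before it is sent to a public AI system."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

findings = screen_prompt("Summarize the donor list for jane.doe@example.org")
if findings:
    print(f"Blocked: prompt appears to contain {', '.join(findings)}.")
else:
    print("No sensitive patterns detected; required approvals still apply.")
```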
GOVERNING LAWS AND REGULATIONS
All applicable laws, regulations, and standards must be observed when using AI systems, including but not limited to:
- Copyright laws
- General Data Protection Regulation (GDPR)
- California Consumer Privacy Act of 2018 (CCPA)
- Personal Information Protection and Electronic Documents Act (PIPEDA)
- ISO/IEC 22989:2022
- ISO/IEC JTC 1/SC 42
NONCOMPLIANCE
Violations of this policy will be treated like other allegations of wrongdoing at SLAM. Allegations of misconduct will be adjudicated according to established procedures. Sanctions for noncompliance may include, but are not limited to, one or more of the following:
- Disciplinary action according to applicable SLAM policies.
- Termination of employment.
- Legal action according to applicable laws and contractual agreements.
RELEVANT PROCEDURES, STANDARDS, AND PROCESSES
- Internal Access Review Process
- Data Backup Procedure
- Incident Response Plan
RELATED POLICIES
- Appropriate Use Policy
- Access Controls and Password Policy
- Privacy Policy
- Security Policy Awareness and Security Awareness Training Policy
- IT Security Policy
- Secure Application Development Policy
APPROVAL AND OWNERSHIP
Owner       | Title | Date
            |       |

Approved By | Title | Date
            |       |
REVISION HISTORY
Version | Revision Date | Review Date | Reviewer/Approver
        |               |             |
        |               |             |