Earlier this year, the University of Miami shared an update on the use of artificial intelligence (AI) software and its potential vulnerabilities in academic and healthcare settings. Since that time, the University has continued to monitor use cases for AI software, along with the significant increase in its availability. AI tools such as generative AI-enabled digital assistants (e.g., Otter.ai) are increasingly in use and are shaping the future as we know it; however, they continue to pose a risk to the confidentiality of sensitive information.
Digital assistants are AI-driven tools created to mimic human actions, such as note taking, and they collect information to train, improve, and enhance their future performance. They also create written records of meetings and can autonomously send a written transcript to any email address. While this capability can be effective and provide efficiency benefits, it also creates the risk of breaching state, federal, and international privacy laws.
The University does not permit the use of AI digital assistants with any topic related to patient, student, employee, or research information. This includes the use of these tools to create written transcripts of virtual meetings. Click-through licensing agreements for leading digital assistant companies warn against using this technology to interact with sensitive information; doing so could constitute a public disclosure and lead to a loss of our ability to protect University information, including intellectual property. To ensure proper use at the University of Miami, a policy is under development and will be forthcoming.
We will continue to monitor the changing AI environment and share updates as needed. If you have any questions, or have a specific use case for these tools you would like to discuss, please contact UHealth Compliance or the Office of the Vice Provost for Research and Scholarship.