
Written by Mika Dewitz-Cryan,
Risk Management Consultant
03/05/2024 · 5 minute read
While use of generative AI within the design community may be new, the risk mitigation strategies required to successfully navigate this landscape are not.
The key lies in remembering that whether the design professional is relying on the assistance of a junior member of the team or a generative AI tool, it is ultimately the design professional who is responsible for the designs and documents produced. As such, prudent design professionals and firms should consider proper implementation of the following:
Have a policy in place to provide guidance on which AI tools employees are permitted to use and the parameters of that use. This might include limiting use to specifically prescribed and previously approved generative AI platforms.
Don’t just regulate, educate! Employees should be appropriately trained on the use of approved AI tools (e.g., how to properly craft prompts) to enable them to use these tools effectively. Training also provides the ideal opportunity to reinforce important firm policies, such as the dangers of using public, unapproved generative AI platforms and how to properly scrub sensitive client, project, and firm information.
For full transparency, disclose to clients up-front any use of generative AI tools or AI-aided documents, and be prepared to address any concerns or reservations clients may have.
The design firm’s designs and other documents should always comply with applicable contractual, legal, and ethical requirements, whether AI tools contributed to the final product or not. This includes respecting the intellectual property rights of others and refraining from creating designs that would infringe on copyrighted works.
Supplying the inputs that generative AI tools require can violate privacy and confidentiality rights. Accordingly, care should be exercised to avoid revealing copyrighted, confidential, or proprietary information belonging to the firm, client, project, or others.
The outputs generated by AI tools pose interesting questions regarding the ownership of intellectual property rights to created works, both domestically and internationally. In the United States, the US Copyright Office’s guide, Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence (“guide”), may prove instructive. As the guide notes, when “AI technology determines the expressive elements of its output,” the resulting material “is not the product of human authorship”; as such, it is not protected by copyright and must be disclaimed in the registration application unless it is de minimis (too trivial or minor to merit consideration, especially in law). A work consisting of both AI-generated and human-authored material, however, may be registered if a human author selects, arranges, or modifies the AI-generated material in a sufficiently creative way.
Leaving aside questions of what constitutes de minimis and sufficiently creative arrangement, both of which will necessarily be determined on a case-by-case basis, there remains the question of ownership rights, to the extent that they exist. Where laws, rules, regulations, and case law remain silent on the matter, existing collaborative processes may provide some guidance. Most likely, ownership of content will follow the conventional practices currently exercised on collaborative projects; namely, it will depend on how those rights are contractually allocated. A fair allocation might look something like this: rights to standard details remain with the owners of those rights, while rights to any newly generated and arranged content are granted according to the terms of the parties’ agreement. It will also be important to pay attention to how generative AI platforms allocate ownership of created content in their user agreements.
Design firms must have a thorough quality assurance/quality control (QA/QC) process to validate the assumptions, inputs, calculations, and outputs of the firm’s designs and documents and revise them as needed. This remains true for any designs or documents created, in part, using generative AI tools. Proper QA/QC in the context of AI tools necessarily requires that these tools be used only to assist with tasks within the firm’s field of expertise or for which proper oversight and review can be arranged.
As the attorneys at Lee/Shoemaker PLLC so aptly stated in their article, “Artificial Intelligence and Natural Liability”:
While there is a role for AI within the design profession for contributions to both the mundane and the magnificent, the prudent design professional will ensure that their use of AI conforms to the designer’s professional obligations and does not create avoidable legal jeopardy.
This undertaking may not be as novel as the subject matter might suggest. Generative AI may change the design process for firms, but it does not change the risk mitigation strategies firms should employ.
For more articles and ideas on risk mitigation strategies, we encourage our policyholders to access Victor’s exclusive content through the Victor Risk Advisory platform.