The use of AI in corporate environments has seen a mix of support and resistance in the last five years. But 2023 appears to be the year of the big AI breakout, characterized by the rapid deployment of AI tools in organizations across industries.
However, the proliferation of AI poses a variety of risks for organizations, from privacy and ethical concerns to potential inaccuracies and compliance issues. Interestingly, McKinsey reports that while there is a broad awareness of these risks, only 21% of organizations have established policies to mitigate them.
So, what could be hindering legal teams from developing and implementing AI use policies?
Developing a policy that matches the complex nature of AI technology requires input from various stakeholders, including legal experts, data scientists, privacy officers, IT professionals, and business leaders.
In many cases, legal teams face challenges in aligning the diverse perspectives, interests, and priorities of these stakeholders, hindering the consensus needed to formulate comprehensive and actionable AI use policies.
Here, we have detailed all you need to navigate these complexities as you strive to establish an effective AI use policy for your organization.
Developing an AI use policy without stakeholder buy-in
“Most organizations are realizing that they should have a policy in place for AI adoption, because, otherwise, there's a risk of customer data or confidential data being put into the public tooling.”
~ Ken Priore, ex-Director of Privacy, Atlassian
Mastering the Intersection of Law, Technology, and Privacy
The importance of an AI use policy in corporate environments is common knowledge among legal teams. It ensures compliance with established regulations, mitigates privacy and security vulnerabilities, promotes ethical practice, and enhances decision-making.
But to truly make the impact it’s designed for, every AI use policy must have one mission-critical element: stakeholder buy-in.
Stakeholder involvement in the development process ensures the AI use policy addresses the demands of prevailing business conditions, offers practical instructions, mitigates risks, and ultimately fulfills its purpose.
Conversely, if you develop the policy without support and involvement from stakeholders, it will:
#1 Overlook practical insights
Stakeholders, including employees, managers, and senior executives, often possess valuable practical insights into the daily operations and nuances of the business. They offer unique perspectives on how AI has impacted their respective teams and the potential risks involved.
Without their input and support, the AI use policy may miss crucial details, potential challenges, or opportunities for optimization. You end up developing a policy that is disconnected from the practical realities of the organization.
#2 Face resistance and pushback
Without stakeholder buy-in, there is a higher likelihood of encountering resistance and pushback from those directly affected by the AI implementation. Employees may resist changes to their workflows or processes, leading to decreased morale and productivity.
This can hinder the successful integration of AI technologies and may even result in the abandonment of the policy altogether.
#3 Risk poor implementation
Stakeholders are vital to the seamless implementation of any policy. Their buy-in ensures the policy is effectively communicated, understood, and executed across different departments and teams.
Without this support, the implementation may be haphazard, inconsistent, and prone to errors, leading to suboptimal results and potential negative consequences.
#4 Fail to adapt to evolving business needs
Stakeholders provide valuable feedback based on their experience and changing business requirements. Without their ongoing engagement, the AI policy may become outdated and fail to adapt to evolving needs.
A policy that does not evolve with the organization's dynamics may not remain effective in addressing emerging challenges or leveraging new opportunities.
#5 Undermine trust and transparency
Implementing an AI policy without stakeholder involvement can create a sense of mistrust and secrecy. Employees may feel their concerns and perspectives are not valued, leading to speculation and skepticism about the policy's intentions and implications.
#6 Limit innovation and creativity
Stakeholders often contribute unique experiences and ideas that can drive innovation. Without their involvement, the AI policy may stifle creativity and limit the exploration of innovative uses of AI within the organization.
This lack of innovation can put the company at a competitive disadvantage in industries where AI is rapidly evolving.
#7 Create silos and division
Departments or teams that feel excluded from the AI policy development process may develop their own approaches or resist collaboration with others. The resulting silos hinder the organization's ability to leverage the full potential of AI through cross-functional work.
By contrast, building consensus and incorporating varied viewpoints into the AI use policy creates a shared understanding and promotes collaboration across the organization.
Which stakeholders should you involve in the AI use policy development?
“It's always important to try and get every angle that you can. Get insight from managers, people who report to you, and also people that are comparable within or outside the legal group. Really try to get a sense in different ways.”
~ Doug Luftman, ex-DGC, DocuSign
The Key to Success as an In-House Legal Counsel & Leader
Developing an effective AI use policy involves drawing from the diverse perspectives, insights, and experiences offered by various stakeholders across the organization. These are the individuals or groups that can affect, or be affected by, the AI use policy.
#1 Executive leadership
Executive leadership, including C-suite executives and top management, plays a crucial role in AI use policy development due to their strategic oversight.
Their involvement ensures alignment between AI initiatives and the organization's overall objectives. Executives bring a high-level perspective on risk tolerance, ethical considerations, and long-term business goals.
Their support is instrumental in securing necessary resources and implementing policies that reflect the organization's values.
#2 Legal team
Members of the legal team (including yourself) are essential for ensuring compliance with regulations governing AI use. They can identify legal risks and implications, ensuring the AI use policy adheres to data protection, privacy, and intellectual property laws.
Their expertise is crucial in drafting clear, enforceable guidelines, minimizing legal exposure, and addressing liability issues associated with AI applications.
#3 Business units
Business units are directly impacted by AI applications in their day-to-day operations. Their input is invaluable for creating a policy that is both effective and feasible in real-world scenarios.
Collaboration with business units helps identify areas where AI can enhance productivity, streamline processes, and create value. This inclusion ensures that the policy is fair, addresses business requirements, and is well-received by the teams implementing AI solutions.
#4 Human Resources (HR)
Stakeholders from the HR department are concerned with how AI policies impact workforce management, including talent acquisition, employee training, and potential job displacement.
They’re also affected by how AI policies contribute to shaping the organizational culture.
Thus, having them on board is crucial to addressing the human element of AI implementation. They can contribute to the AI use policy by establishing guidelines for workforce training, addressing concerns related to job displacement, and ensuring ethical considerations in AI usage.
#5 Technology and data teams
Technology and data teams are at the heart of AI implementation. Their involvement in the development process ensures that the policy is not just a theoretical framework but is grounded in the practical realities of AI systems.
Their expertise is essential in defining technical standards, data governance, and ensuring the ethical use of AI.
Additionally, collaboration with technology and data teams enhances the policy's adaptability to emerging technological trends.
Tips for effective stakeholder engagement in developing AI use policies
As already established, stakeholder involvement is mission-critical in building an AI use policy that not only protects the organization from pitfalls but also encourages higher productivity and ethical practices.
Here, we’ve covered some of the best tips you should keep in mind when engaging with stakeholders to ensure the best outcomes.
#1 Plan your approach
The sooner you connect with stakeholders about the AI use policy, the better. Before you do, though, it is crucial to outline your approach by answering a few questions:
- What are your objectives for engaging each stakeholder?
- What inputs would you need from them?
- At what stage of the policy development and implementation would their input be required?
- What part of the AI use policy directly impacts each stakeholder?
Having these insights beforehand allows you to go prepared, ensuring seamless communication with your stakeholders.
#2 Embrace a culture of iteration
"These policies need to be fairly iterative. You can't be updating them all the time, or none of us would get anything done, but my guess is they're going to keep changing and iterating based on what we see and how they're working."
~ Julia Shulman, General Counsel, Telly
SpotDraft Panel and Q&A: Developing and Implementing Practical AI-Use Policies
AI technology is still evolving, and so are its regulations. Embrace an iterative approach to policy development, signaling to stakeholders that their input is not a one-time event.
This fosters a collaborative environment where policies can evolve in response to emerging challenges, technological advancements, and changing organizational needs.
#3 Keep it simple, keep it clear
“I've seen other companies' policies, and honestly, I found a lot of them very hard to follow. So, making sure people understand it is crucial.”
~ Noga Rosenthal, General Counsel and Chief Privacy Officer, Ampersand
SpotDraft Panel and Q&A: Developing and Implementing Practical AI-Use Policies
Make the AI policy documentation easily understandable for stakeholders with varying levels of technical expertise. Use plain language and clear examples to explain complex concepts.
By ensuring clarity, you facilitate greater understanding and participation from stakeholders, promoting a more inclusive policy development process.
#4 Create an inclusive environment
Create an atmosphere where stakeholders from different departments and roles feel welcome to contribute. Actively seek out and include varied ideas and opinions, ensuring that policies consider a broad range of viewpoints.
An inclusive decision-making environment enhances not only the robustness of the AI use policy but also its acceptance across departments.
#5 Speak their language
Recognize that stakeholders have different levels of expertise and priorities. Tailor your communication to resonate with each group.
Use clear, jargon-free language when discussing technical aspects with non-experts, and delve deeper into legal and compliance nuances with relevant teams.
Customizing your message ensures that everyone comprehends the importance and implications of the AI policies in their specific contexts.
No more silos
Developing an AI use policy isn't just about legal jargon and technical specifications – it's about building a shared vision for how this transformative technology will empower your organization.
Stakeholders bring unique views that enrich the policy with practical insights, ensuring its relevance and effectiveness in real-world scenarios. Without this collaborative approach, organizations risk overlooking crucial details, facing employee resistance, and struggling with poor policy implementation.
By actively involving stakeholders in the process, you not only enhance the robustness of your organization’s AI use policy but also pave the way for innovation, adaptability, and sustained trust across the entire spectrum of the business.