How to manage the risk of generative Ai

How to manage the risk of generative Ai

Leaders across industries, policymakers, and academics are seeking ways to harness the power of generative AI technology, which has the potential to revolutionize various aspects of our lives. In the business world, generative AI can transform customer interactions and drive growth, with 67% of senior IT leaders prioritizing its adoption within the next 18 months. This technology is being explored in sales, customer service, marketing, commerce, IT, legal, HR, and other areas of business.

However, there is a need for secure and trusted methods for employees to utilize these technologies. Concerns about security risks and biased outcomes have been reported by 79% and 73% of senior IT leaders, respectively. Organizations must prioritize ethical, transparent, and responsible usage of generative AI to mitigate potential risks.

It’s crucial to distinguish the use of generative AI in enterprise settings from its use by individual consumers. Businesses must adhere to industry-specific regulations, considering legal, financial, and ethical implications. The accuracy, accessibility, and appropriateness of generated content are of utmost importance. Incorrect instructions for cooking a recipe may have minimal consequences compared to providing inaccurate guidance for repairing heavy machinery. Without clear ethical guidelines, generative AI can have unintended negative effects and cause harm.

Organizations require a practical framework to effectively utilize generative AI and align its goals with the core objectives of their business. This framework should encompass the impact of generative AI on various aspects such as sales, marketing, commerce, service, and IT roles.

In 2019, we introduced our trusted AI principles, including transparency, fairness, responsibility, accountability, and reliability, as a foundation for the development of ethical AI tools. While these principles are valuable, organizations must go beyond them and establish an ethical AI practice to effectively incorporate them into the development and adoption of AI technology. A mature ethical AI practice integrates these principles into responsible product development and deployment, involving disciplines like product management, data science, engineering, privacy, legal, user research, design, and accessibility. This approach helps mitigate potential harms and maximize the societal benefits of AI. There are established models that organizations can follow to initiate, mature, and expand these practices, providing clear guidance for building the necessary infrastructure for ethical AI development.

Recognizing the increasing prevalence and accessibility of generative AI, we acknowledge the need for specific guidelines addressing the unique risks associated with this technology. These guidelines complement our principles and serve as a guiding light for their practical implementation, assisting businesses in developing products and services that utilize this new technology. Our guidelines offer valuable insights for organizations navigating the risks and considerations associated with the widespread adoption of generative AI. These guidelines focus on five key areas.

Organizations must have the capability to train AI models using their own data to ensure verifiable results that balance accuracy, precision, and recall. It is crucial to communicate any uncertainties related to generative AI responses and provide means for people to validate them. This can be achieved by referencing the sources of information used by the model, explaining the rationale behind AI-generated responses, highlighting areas of uncertainty, and implementing safeguards to limit full automation of certain tasks.

Mitigating bias, toxicity, and harmful outputs is always a top priority in AI, and organizations should conduct bias, explainability, and robustness assessments to address these concerns. Safeguarding the privacy of personally identifiable information present in the training data is essential to prevent potential harm. Additionally, security assessments play a crucial role in identifying vulnerabilities that could be exploited by malicious actors, such as prompt injection attacks aimed at bypassing ChatGPT’s safety measures.

Respecting data provenance and obtaining consent for data usage are essential when collecting data for model training and evaluation. Open-source and user-provided data can be leveraged for this purpose. When autonomously delivering outputs, it is crucial to transparently indicate that the content was created by AI. This can be achieved through watermarks or in-app messaging.

While full automation may be suitable in certain cases, AI should generally play a supportive role. Generative AI serves as an effective assistant, particularly in industries like finance or healthcare where trust-building is paramount. Human involvement in decision-making, complemented by data-driven insights from AI models, fosters trust and maintains transparency. Additionally, ensuring accessibility of the model’s outputs is important, such as generating ALT text for images and making text output compatible with screen readers. Respecting content contributors, creators, and data labeller’s by providing fair wages and obtaining their consent is crucial.

The size of AI models does not always correlate with their quality. In our model development, we prioritize minimizing model size while maximizing accuracy through extensive training on large volumes of high-quality CRM data. This approach helps reduce the carbon footprint by minimizing computation requirements, leading to lower energy consumption in data centers and reduced carbon emissions.

To safely integrate generative AI in business applications and achieve desired outcomes, organizations should consider the following tactical tips:

Emphasize the use of zero-party data, which is voluntarily shared by customers, and first-party data collected directly by the company, for training generative AI tools. Maintaining strong data provenance is crucial to ensure the accuracy, originality, and trustworthiness of the models.

Avoid relying heavily on third-party data, as it can introduce challenges in guaranteeing the accuracy of the AI tool’s output. By prioritizing the use of customer-provided and directly collected data, organizations can enhance the reliability and effectiveness of generative AI tools in driving business results.

AI is only as good as the data it’s trained on. Inaccurate or biased training data can result in inaccurate or biased AI tools. Companies must review and curate datasets to ensure safety and accuracy. Generative AI should augment human capabilities, not replace them. Humans must review outputs for accuracy and bias. Companies play a critical role in responsibly adopting generative AI, ensuring accuracy, safety, honesty, empowerment, and sustainability while mitigating risks and eliminating biased outcomes.

Generative AI requires constant oversight. Companies can automate the review process by collecting metadata and developing standard mitigations for risks. Humans must also check output for accuracy, bias, and hallucinations. Companies can invest in ethical AI training for front-line engineers and managers to assess AI tools. If resources are constrained, they can prioritize testing models with the most potential to cause harm.

Listening to employees, advisors, and impacted communities is key to identifying risks and course-correcting. Companies can create pathways for employees to report concerns and form ethics advisory councils to weigh in on AI development. Open communication with community stakeholders is also important. With generative AI going mainstream, enterprises have the responsibility to use this technology ethically and mitigate potential harm. Sticking to a firm ethical framework can help navigate this period of rapid transformation.

Join us

This Week

Recommended