3 min read

Identifying and Managing AI Risks in Your Environment

AI seems to be the next big thing in computing and technology, but few people understand what AI tools like ChatGPT, Copilot, and Gemini really are, what risks they pose to an organization, and how to protect your organization from those risks.

Common Misconceptions About AI

Many people envision AI as a real-life version of Jarvis from the Iron Man and Avengers movies: Tony Stark can say virtually anything, and Jarvis steps right in with the perfect answer, action, or analysis of the situation.

AI is expected to reduce the burden of repetitive or mundane tasks and to help solve complex problems. However, these tools cannot recognize their own errors, and inaccurate or incorrect information provided by an AI can lead to disastrous outcomes for an organization.

Real-World AI Risk

A recent legal case highlighted the risks associated with AI. An attorney used a Generative AI model to craft a legal brief submitted to Federal Court.

The attorney asked the AI model to draft the brief, outline the legal stance, and validate the citations, cases, and briefs used in the filing. The AI model produced the documents and ‘verified’ that the cases were listed in reputable databases like LexisNexis and Westlaw.

Unfortunately, none of this was independently verified: the AI tool had completely fabricated the citations, references, and rulings listed in the brief.

This happens because people fail to understand exactly what Generative AI tools like ChatGPT, Copilot, and Gemini are.

Unlike the fictional Jarvis, these tools use Large Language Models (LLMs) to predict the most likely next word in a sequence. That word doesn’t have to be accurate, make sense, or reflect any real analysis; it is simply the statistically most likely continuation, based on the data used to train the model.
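
To make that concrete, below is a minimal, purely illustrative Python sketch of next-word prediction using a toy bigram model. Real LLMs use neural networks trained on enormous token corpora rather than raw word counts, but the core behavior is the same: the model chooses a statistically likely next word, not a verified fact.

```python
from collections import Counter, defaultdict

# Toy illustration only: count which word most often follows each word
# in a tiny corpus, then "predict" by picking the most frequent successor.
corpus = (
    "the court granted the motion "
    "the court denied the motion "
    "the court granted the appeal"
).split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("court"))  # "granted" -- the most frequent, not the most true
print(predict_next("the"))    # "court"
```

The model happily answers with whatever followed most often in its training data; nothing in the mechanism checks whether the answer is correct.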

Additional Risks Associated with AI Use

This is just one of the risks associated with the use of AI. Other risks include:

  • Data Poisoning: Adversaries could intentionally corrupt the data used to train AI models.
  • False Analysis: AI could generate incorrect analyses or summaries.
  • Incomplete Summarizations: AI may fail to comprehensively summarize complex data.

Getting ahead of these risks requires immediate action: developing overall AI use governance, monitoring AI usage, and training users.

NIST AI Risk Management Framework

In January 2023, NIST released NIST AI 100-1, the AI Risk Management Framework (AI RMF). This document is designed to help organizations identify the risks of AI within their environment, monitor AI usage, and reduce the overall likelihood and impact of inappropriate AI use.

NIST identifies seven characteristics of a trustworthy AI system:

  • Valid and Reliable
  • Safe
  • Secure and Resilient
  • Accountable and Transparent
  • Explainable and Interpretable
  • Privacy-Enhanced
  • Fair, with Harmful Bias managed

Any organization looking to adopt Generative AI should verify that the tools it deploys support these characteristics.

Figure 1 - Characteristics of trustworthy AI systems

NIST AI RMF Core Functions

The NIST AI RMF defines four core Functions to help organizations manage their AI risks. These four functions – Govern, Map, Measure, and Manage – and their associated categories provide a roadmap for any organization wanting to embrace the opportunities provided by AI.

Figure 2 - NIST AI RMF

Govern

The place to start is the Govern function. It sets the overall tone for AI use within an organization and defines training requirements, accountability, and documentation for these programs.

Map

The next step is the Map function, which establishes the context needed to frame risks related to an AI system.

Mapping AI use includes understanding the overall intended use of AI, relevant AI use goals, business context, categorization, capabilities, risks, and impacts, as sketched below. Once this context has been established, the final two functions can be appropriately engaged.
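
As a hypothetical illustration, here is what one record in an AI use-case inventory might look like. The field names and sample values are illustrative assumptions, not a schema defined by the NIST AI RMF.

```python
from dataclasses import dataclass, field

# Hypothetical example: one entry in an AI use-case inventory. The fields
# mirror the kinds of context the Map function asks for; they are
# illustrative assumptions, not an official NIST AI RMF schema.
@dataclass
class AIUseCase:
    name: str
    intended_use: str
    business_context: str
    capabilities: list[str] = field(default_factory=list)
    known_risks: list[str] = field(default_factory=list)
    potential_impacts: list[str] = field(default_factory=list)

inventory = [
    AIUseCase(
        name="Helpdesk drafting assistant",
        intended_use="Draft first-response emails for human review",
        business_context="Customer support; no access to regulated data",
        capabilities=["text generation", "summarization"],
        known_risks=["fabricated details", "incorrect tone"],
        potential_impacts=["customer confusion", "reputational harm"],
    ),
]

for use_case in inventory:
    print(f"{use_case.name}: {len(use_case.known_risks)} known risk(s)")
```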

Measure and Manage

These two functions – Measure and Manage – define the actions required to address the findings of the first two functions.

Measure uses qualitative, quantitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk. It takes the AI risks identified in the Map function as input and informs the requirements of the Manage function.
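
As a simple quantitative illustration, the sketch below scores each mapped AI risk by likelihood and impact. The risk names, 1–5 scales, and escalation threshold are hypothetical examples, not values prescribed by the NIST AI RMF.

```python
# Hypothetical example: a simple likelihood x impact score for mapped AI risks.
# The risks, 1-5 scales, and review threshold are illustrative assumptions,
# not values defined by the NIST AI RMF.
mapped_risks = [
    # (risk, likelihood 1-5, impact 1-5)
    ("Fabricated citations in generated documents", 4, 5),
    ("Sensitive data pasted into public AI tools",  3, 4),
    ("Training data poisoning",                     2, 5),
]

REVIEW_THRESHOLD = 12  # scores at or above this get escalated under Manage

for risk, likelihood, impact in sorted(
    mapped_risks, key=lambda r: r[1] * r[2], reverse=True
):
    score = likelihood * impact
    flag = "ESCALATE" if score >= REVIEW_THRESHOLD else "monitor"
    print(f"{score:>2}  {flag:<8}  {risk}")
```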

Manage entails regularly allocating risk resources to the mapped and measured risks, as defined by the Govern function.

This process uses contextual information gathered from previous functions to decrease the likelihood of negative impacts from improper use of AI. This final function also establishes the resources needed to implement the AI risk management program established by the organization.

AI Risk Management Implementation

AI management is still in its infancy, and organizations are scrambling to understand how to manage these risks properly.

The most common management methods involve monitoring the network for signs of unauthorized AI tool usage, reviewing audit logs of corporate AI tool subscriptions, or developing full AI sandboxes that strictly regulate both the input and output of AI tools.
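
As a rough sketch of the first approach, the snippet below scans DNS query logs for domains associated with well-known generative AI services. The log format (a "dns_queries.csv" file with timestamp, client, and query columns) and the domain list are assumptions for illustration; in production, this logic would typically live in a SIEM or secure web gateway rather than a standalone script.

```python
import csv

# Hypothetical example: flag DNS queries to well-known generative AI services.
# The CSV columns ('timestamp', 'client', 'query') and the domain list are
# illustrative assumptions, not a complete or official inventory.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "copilot.microsoft.com",
    "claude.ai",
}

def flag_ai_queries(log_path: str):
    """Yield (timestamp, client, query) rows that match a known AI domain."""
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            query = row["query"].lower().rstrip(".")
            if any(query == d or query.endswith("." + d) for d in AI_DOMAINS):
                yield row["timestamp"], row["client"], query

if __name__ == "__main__":
    for ts, client, domain in flag_ai_queries("dns_queries.csv"):
        print(f"{ts}  {client}  ->  {domain}")
```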

As the requirements for AI management become more apparent, more tools and audit capabilities will be developed for organizations looking to govern their overall AI usage.

Conclusion

AI has the potential to radically transform the way we work and conduct business, and the benefits are many. But as with any technology, it is imperative that the risks be fully understood before opening the floodgates to misuse – whether intentional or unintentional.

And, no, AI was not used to create this blog (but it may have helped proofread some sections) ;)

PacketWatch is ready to help organizations develop their own AI implementation strategies through the development of an AI Policy, the review of existing policies against the NIST AI RMF, and the management and monitoring of existing controls to ensure they remain effective in the changing AI landscape. Contact PacketWatch today.


Todd Welfelt has an Information Technology career spanning more than 25 years. 

Todd has turned his extensive experience with hands-on management and maintenance of computer systems into the practical assessment and implementation of security tools that meet the needs of compliance frameworks and provide real-world risk reduction.