
California recently released GenAI Guidelines for Public Sector Procurement, Uses and Training, as well as a GenAI Risk Assessment.
What do you need to know?
The guidelines and risk assessment come on the heels of Gov. Gavin Newsom’s AI Executive Order and the California GenAI Risk Report.
Key points:
- Generative Artificial Intelligence (GenAI) is defined as pretrained AI models that can generate images, videos, audio, text, and other derived synthetic content.
- For Incidental GenAI purposes, all state entities must: (1) assign a member of the executive team responsibility for continuous GenAI monitoring and evaluation; (2) attend mandatory executive and procurement team GenAI trainings; and (3) review annual employee training and policy to ensure staff understand and acknowledge the acceptable use of GenAI tools.
- For Intentional GenAI procurement, all state entities ALSO must: (4) identify a business need (before procurement) and understand the implications of using GenAI to solve that problem statement; (5) create a culture of engagement and open communication with state employee end users; (6) assess the risks and potential impacts of deploying the GenAI under consideration; (7) invest time and resources (before procurement) to prepare data inputs and adequately test models; and (8) establish a GenAI-focused team responsible for continuously evaluating the potential use of GenAI and its implications for operations and program administration.
Risk Assessment:
- Deployment of GenAI technologies must be evaluated through a risk assessment based on the National Institute of Standards and Technology (NIST) AI Risk Management Framework, as well as relevant portions of the State Administrative Manual (SAM) and the Statewide Information Management Manual (SIMM).
For low-risk GenAI systems:
- Describe the project use case, the problem, and the impact of the outcome
- Were there other options considered?
- Will the GenAI system be shared or procured with any other state entity or third-party organization?
- Have a Privacy Threshold Assessment (PTA) and a Privacy Impact Assessment (PIA) (SIMM 5310-C) been completed?
For moderate- to high-risk systems, also:
- What type of model(s) and/or network(s) will be used in the GenAI system?
- What mechanism will the GenAI system use to notify a user that they are interacting with a GenAI system rather than a human?
- Does the output of the system make decisions that have legal or similarly significant effects?
Additional general questions:
- What are the data inputs?
- Who will be the GenAI team responsible for the system?
- How does using the GenAI tool build trust with the end user?
- How will system owners identify and mitigate hallucinations and accuracy issues?