Artificial intelligence compliance plan

Our AI compliance plan is GSA's response to the Office of Management and Budget, or OMB, Memorandum M-24-10.

September 2024

General

Describe any planned or current efforts within your agency to update any existing internal AI principles, guidelines, or policy to ensure consistency with M-24-10.

On June 7, 2024, GSA released the Use of Artificial Intelligence (AI) at GSA Directive (2185.1A CIO), which established its AI policy based on the mandates and guidance set forth in OMB’s Advancing Governance, Innovation, and Risk Management for Agency Use of AI [PDF] memo (M-24-10). GSA’s directive established the governing policies for the controlled access and responsible use of AI technologies and platforms. It addressed the assessment, procurement, usage, monitoring, and governance of AI systems and software within GSA’s network, in conjunction with existing security and privacy policies, directives, ethics regulations, and laws. Through 2185.1A CIO, GSA will ensure its guidelines address critical issues such as bias mitigation, transparency, accountability, and the ethical use of AI.

Prior to the release of M-24-10, GSA had in place a policy specific to generative AI: the Security Policy for Generative Artificial Intelligence (AI) Large Language Models (LLMs) (CIO IL-23-01). This policy was canceled by the AI Directive (2185.1A CIO), which expanded upon the controls set out in CIO IL-23-01 and ensured GSA was in full alignment with M-24-10. Additionally, related policies, including GSA’s privacy assessments, authority to operate (ATO) policies, and general IT policies, have been reviewed for consistency with M-24-10.

GSA will conduct annual reviews and assessments of 2185.1A CIO to ensure the mandates set forth in M-24-10 are maintained as AI capabilities continue to evolve. The Chief Artificial Intelligence Officer (CAIO) will lead these reviews with the support of the AI Governance Board and AI Safety Team. Updates to 2185.1A CIO and other relevant policies will be rolled out as needed and are not restricted to an annual cycle.

AI Governance Bodies

Identify the offices that are represented on your agency’s AI governance body.

GSA’s AI governance structure is composed of two bodies: the AI Governance Board and the AI Safety Team. The AI Governance Board is an executive-level team chaired by the Deputy Administrator and co-chaired by the CAIO. The board provides top-level oversight and strategic leadership in AI-related decisions and includes principals from across GSA:

  • Chief Information Officer (CIO)
  • Chief Information Security Officer (CISO)
  • Chief Privacy Officer (CPO)
  • Chief Technology Officer (CTO)
  • Federal Acquisition Service (FAS) Deputy Commissioner or Designee
  • Public Buildings Service (PBS) Deputy Commissioner or Designee
  • Technology Transformation Services (TTS) Director
  • Office of Small and Disadvantaged Business Utilization (OSDBU) Associate Administrator
  • Chief Acquisition Officer (CAO) or Designee
  • Chief Financial Officer (CFO)
  • Office of Strategic Communication (OSC) Associate Administrator
  • Performance Improvement Officer (PIO)
  • Chief Diversity and Inclusion Officer (CDIO)
  • Office of Civil Rights (OCR) Associate Administrator
  • Chief Human Capital Officer (CHCO)
  • General Counsel or Designee
  • Office of Customer Experience (OCE) Chief Customer Officer (CCO)
  • Evaluation Officer (EO)
  • Statistical Official (SO)

The AI Governance Board is responsible for fulfilling its duties under Executive Order 14110 [PDF] and the guidance of M-24-10, which includes monitoring cross-agency AI priorities by shaping GSA’s AI Strategic Plan and identifying the necessary resources to implement those priorities. The board also prioritizes AI capabilities across the agency, identifying gaps, duplications, and overlaps in AI efforts, and defines cost-effective AI solutions based on established criteria, policies, and processes.

Additionally, the AI Governance Board will coordinate with AI system owners to strengthen strategic planning and risk management efforts. The board is tasked with setting agency-wide AI policies in a manner that supports, but does not override, the statutory authority of existing roles. It will establish risk tolerance thresholds and manage a portfolio of AI use case risks that align with GSA’s mission.

As part of its governance responsibilities, the board will review and make decisions on all safety-impacting and rights-impacting AI use cases. It will also create and oversee working groups, such as the AI Safety Team, to address AI governance issues, set priorities, and manage AI use cases that further support GSA’s mission. Finally, the AI Governance Board will assist the CAIO in ensuring that the agency complies with regulations and guidance under Executive Orders 13859, 13960, 14091, 14110, and M-24-10.

The AI Safety Team focuses on the implementation and monitoring of AI systems to ensure their safe and ethical use. The AI Safety Team is chaired by the CAIO and consists of representatives from the offices of each principal member of GSA’s AI Governance Board. 

The Safety Team works under the guidance of the AI Governance Board and is technically oriented, concentrating on identifying potential risks, managing AI-related security concerns, and conducting internal audits. The AI Safety Team is responsible for ensuring that AI systems are transparent and accountable and that known biases are understood and accounted for, with an emphasis on compliance with GSA’s privacy and security protocols. While the Governance Board sets the policy direction and risk thresholds, the AI Safety Team ensures GSA adheres to those policies.

The AI Safety Team’s roles and responsibilities include:

  • Reviewing and evaluating proposed AI use cases, including criteria for assessing feasibility, benefits, risks, and compliance with ethical standards and legal requirements.
  • Engaging with relevant stakeholders, including internal departments, external partners, and users, to gather input, address concerns, and ensure buy-in for proposed AI use cases.
    • The CAIO assigns a Safety Team member as steward for every use case.
    • The steward is responsible for summarizing the proposal in their own words and presenting the use case to the rest of the Safety Team. 
  • Conducting thorough risk assessments to identify potential risks associated with AI use cases, such as data privacy concerns, algorithmic biases, security vulnerabilities, and unintended consequences.
  • Promoting transparency and accountability in the AI approval process by documenting decisions, rationale, and potential risks associated with approved use cases.
  • Assessing the ethical implications of proposed AI integration use cases, including considerations of fairness, transparency, accountability, and the protection of individual rights and privacy.
  • Ensuring that proposed AI integration use cases comply with relevant laws, regulations, and industry standards governing AI technologies, data protection, privacy, and security.
  • Conducting technical evaluations of proposed AI integration use cases to assess their feasibility, scalability, performance, and compatibility with existing systems and infrastructure.
  • Making informed decisions regarding the approval or rejection of AI integration use cases based on comprehensive evaluations of their feasibility, benefits, risks, ethical considerations, stakeholder input, and compliance with organizational policies and legal requirements.
  • Generating regular reports on approved AI use cases and their implementation status for internal review and external reporting purposes.

These governance bodies promote the ethical use of AI, enhance operational efficiency, and support the continued development of GSA’s AI policy framework. The Governance Board oversees the adjudication of AI in mission-critical systems, specifically rights-impacting or safety-impacting use cases, while the AI Safety Team ensures compliance and risk management for all AI use cases.

Describe how, if at all, your agency’s AI governance body plans to consult with external experts as appropriate and consistent with applicable law. 

GSA has collaborated, and will continue to collaborate, with external experts, including academic institutions, state governments, and other federal agencies, to develop training programs and benchmark its updated AI policies against best practices in the field. Members of the AI Governance Board and AI Safety Team regularly engage with interagency groups and councils, including the CDO council, the CAIO council, the CHCO council, the Office of Personnel Management (OPM), the AI Talent Task Force, and the three interagency working groups coordinated by the CAIO council on generative AI, AI-related acquisitions, and AI risk management.

External consultation will continue to play a critical role in both governance and safety efforts. The AI Governance Board plans to consult with external experts as appropriate and consistent with applicable law. This consultation may include input from Federally Funded Research and Development Centers, academic institutions, think tanks, and industry experts to stay informed about AI’s latest trends, risks, and best practices. Civil society organizations, labor unions, and similar groups may also be consulted to ensure that AI systems meet public interest standards and consider workforce impacts.

AI Use Case Inventories

Describe your agency’s process for soliciting and collecting AI use cases across all sub-agencies, components, or bureaus for the inventory. In particular, address how your agency plans to ensure your inventory is comprehensive, complete, and encompasses updates to existing use cases.

GSA’s process for soliciting and collecting AI use cases across the organization is led by the CAIO. The CAIO is responsible for ensuring that the AI use case inventory is comprehensive and up to date, with the support of the AI Safety Team, various program offices, and the OCIO. 

The process begins with the issuance of an annual request to all offices, requiring them to submit information on current and planned AI systems. Each office must identify all AI applications, regardless of the size or scope of the system. The CAIO provides and maintains standardized guidelines and templates for reporting in compliance with OMB standards. The centralized submission process is used to collect and manage AI use cases, allowing offices to submit their entries in a consistent and structured manner. All offices participate in this process and regularly update their entries to reflect any changes, such as new AI capabilities, updates to existing systems, or the decommissioning of older technologies.
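
As a purely illustrative sketch, the standardized template might capture fields along the following lines; the field names and values below are hypothetical assumptions, not GSA’s actual reporting template:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCaseEntry:
    """Hypothetical record for a single AI use case inventory submission."""
    use_case_name: str
    sponsoring_office: str
    purpose: str            # plain-language description of the intended benefit
    stage: str              # e.g., "planned", "in development", "deployed", "retired"
    rights_impacting: bool  # per the Section 6 definitions in M-24-10
    safety_impacting: bool
    last_updated: date = field(default_factory=date.today)

# Example submission from a program office (illustrative values only)
entry = AIUseCaseEntry(
    use_case_name="Correspondence routing assistant",
    sponsoring_office="OCIO",
    purpose="Suggest the correct program office for incoming correspondence",
    stage="in development",
    rights_impacting=False,
    safety_impacting=False,
)
```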

The CAIO oversees periodic reviews and audits to maintain the inventory’s accuracy and completeness. The CAIO also coordinates within the OCIO organization so that existing processes like ATOs, Federal Information Technology Acquisition Reform Act reviews, and new software requests can identify AI instances across the enterprise. This process ensures that GSA’s AI inventory remains accurate and fully reflects the agency’s ongoing use of AI technology. This approach, led by the CAIO, provides a comprehensive and complete inventory of AI use cases that support the agency’s governance and decision-making processes regarding AI deployment.

Reporting on AI Use Cases Not Subject to Inventory

Describe your agency’s process for soliciting and collecting AI use cases that meet the criteria for exclusion from being individually inventoried, as required by Section 3(a)(v) of M-24-10.

GSA has not identified any AI use cases that are not subject to inventory.



Removing Barriers to the Responsible Use of AI

Describe any barriers to the responsible use of AI that your agency has identified, as well as any steps your agency has taken (or plans to take) to mitigate or remove these identified barriers.

GSA has identified several potential barriers to the responsible use of AI, including the procurement of AI solutions, access to high-quality data products with scalable infrastructure, and access to AI models and libraries. GSA has established an enterprise data platform (Enterprise Data Solution, or EDS), which includes an enterprise data catalog, scalable compute infrastructure, analytical tooling, and AI/ML systems and services. This platform allows programs to store, curate, and productize their data holdings for analytical purposes, as well as disseminate the information products generated via hosted services. EDS also provides sandbox capabilities where AI tools can be safely tested and rapid prototyping may occur.

To support federal procurement of generative AI, GSA published the Generative AI and Specialized Computing Infrastructure Acquisition Resource Guide in support of EO 14110, which assists the federal acquisition workforce of civilian agencies in navigating the complexities of acquiring generative AI technologies in collaboration with relevant agency officials from other domains.

GSA supports several pilot projects that assess the capabilities and viability of AI products for specific use cases, including the ease with which AI infrastructure and products may be set up, and how AI tools and services can be leveraged to improve mission outcomes for agencies. These pilot projects are designed to help GSA better understand, manage, and provide guidance for internal and other agency use regarding AI technologies for future implementation.

GSA has leveraged sandboxes to test AI capabilities for chatbots, IT security enhancements, custom application development, and general productivity, allowing staff to evaluate AI’s potential while ensuring security and compliance.

To support the availability of AI tooling and infrastructure, the FedRAMP program has established a framework for prioritizing emerging technologies (ETs) for FedRAMP authorization, which covers technologies listed in the Office of Science and Technology Policy’s Critical and Emerging Technologies List. This framework enables routine and consistent prioritization of the most critical cloud-relevant ETs needed for use by federal agencies. The prioritization governs FedRAMP’s own work and review processes; it does not address how sponsoring agencies manage their internal priorities.

Identify whether your agency has developed (or is in the process of developing) internal guidance for the use of generative AI.

GSA has developed internal guidance for the use of generative AI and has made it available to employees on an internal website. This guidance includes the safeguards and oversight mechanisms necessary for responsible use without posing undue risk. The AI Governance Board and AI Safety Team provide oversight by reviewing and dispositioning AI use cases, ensuring compliance with ethical standards, data privacy, and security protocols. The directive also requires that generative AI tools be used under controlled conditions consistent with the guidance and standards established in M-24-10, as enforced and overseen by the AI Safety Team. GSA is evaluating continuous monitoring and evaluation processes and tools for generative AI.

AI Talent

Describe any planned or in-progress initiatives from your agency to increase AI talent. In particular, reference any hiring authorities that your agency is leveraging, describe any AI focused teams that your agency is establishing or expanding, and identify the skillsets or skill levels that your agency is looking to attract. If your agency has designated an AI Talent Lead, identify which office they are assigned to.

GSA has several initiatives in progress to increase AI talent within the agency. Through the AI Talent Surge, GSA is actively recruiting and hiring AI professionals by leveraging hiring authorities such as the Direct Hire Authority (DHA) and the Pathways Programs. GSA is expanding AI-focused teams within the Technology Transformation Services (TTS) and OCIO, focusing on roles requiring machine learning, data science, AI ethics, and cybersecurity expertise. Additionally, GSA is seeking to attract talent with advanced skill sets in AI development, algorithmic fairness, and AI system integration. The designated AI Talent Lead is assigned to the Office of Human Resources Management (OHRM) to coordinate these efforts and ensure alignment with broader agency goals.

GSA is utilizing DHA when appropriate to fill positions in the following approved occupations and job series: 1560 Data Scientist, 1515 Operations Research Analyst, 2210 IT Specialist (Artificial Intelligence), 1550 Computer Scientist (Artificial Intelligence), 0854 Computer Engineer (Artificial Intelligence), and 0343 Management and Program Analyst, focusing on AI-related system design and machine learning development.

If applicable, describe your agency’s plans to provide any resources or training to develop AI talent internally and increase AI training opportunities for Federal employees.

GSA has made AI-related training available through online learning platforms to develop AI talent internally. The agency supports the AI Community of Excellence, which serves as a collaborative space for sharing knowledge and best practices, and is leading the AI Talent Surge effort to attract and retain skilled professionals to advance AI capabilities across the agency.

GSA AI policy (2185.1A CIO) allows employees to gain hands-on experience with both public and internal AI tools in controlled environments. Employees are able to use public AI tools for non-sensitive use cases, including professional development and training purposes. These applications are limited to individual use, with the specific goal of gaining familiarity with market offerings, and align most closely with professional training activities.

GSA is committed to developing AI talent internally and increasing AI training opportunities for federal employees. Role-based AI training tracks are accessible through online learning platforms, providing employees at various levels the opportunity to gain relevant skills. These tracks include foundational courses for employees, intermediate training for technical roles, and advanced courses for AI practitioners focusing on development, deployment, and ethical considerations. Additionally, the AI Community of Practice (AI CoP) fosters knowledge sharing and mentorship, including offering a 3-track governmentwide training series (focused on acquisitions, leadership and policy, and technical), while the AI Talent Surge effort ensures ongoing development of AI expertise across the agency.

AI Sharing and Collaboration

Describe your agency’s process for ensuring that custom-developed AI code—including models and model weights—for AI applications in active use is shared consistent with Section 4(d) of M-24-10.

All custom-developed AI code, including models and model weights for AI applications in active use, is shared in compliance with Section 4(d) of M-24-10 by adhering to established processes for transparency and open access. This includes reviewing AI applications through the AI Safety Team and other relevant offices to ensure that code and models meet data security, privacy, and ethical standards before being shared. This content is disseminated via sharing platforms such as open.gsa.gov and data.gov.
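
Federal code sharing typically rides on machine-readable metadata published alongside the code (for example, the code.json inventories that code.gov harvests from agency sites such as open.gsa.gov). The record below is a minimal, hypothetical sketch of such a release entry; the project name, URL, and field values are placeholders, not a real GSA release:

```python
# Hypothetical metadata for one shared AI code release. Field names follow
# the general shape of code.gov-style inventories, but this is an
# illustrative sketch; consult the current metadata schema for the
# authoritative format.
release = {
    "name": "example-ai-model",  # placeholder project name
    "description": "Custom-developed AI model code shared per M-24-10 Section 4(d)",
    "repositoryURL": "https://github.com/GSA/example-ai-model",  # placeholder URL
    "permissions": {
        "usageType": "openSource",
        "licenses": [{"name": "CC0-1.0"}],
    },
    "tags": ["artificial-intelligence", "model", "model-weights"],
}
```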

Elaborate on your agency’s efforts to encourage or incentivize the sharing of code, models, and data with the public. Include a description of the relevant offices that are responsible for coordinating this work.

GSA fosters a culture of collaboration through its TTS and OCIO to encourage the sharing of code, models, and data with the public. These offices coordinate efforts to release code under open-source licenses, allowing for reuse and community contributions. GSA incentivizes public sharing by integrating it into performance evaluations for relevant teams and promoting the benefits of transparency and innovation through collaboration.

GSA has promoted AI sharing and collaboration through various public-private partnerships such as hackathons and symposiums. In support of the President’s Management Agenda Workforce Priority Strategy Goal 3.1, GSA hosted the Federal AI Hackathon to foster collaboration and problem-solving across federal agencies and leading AI commercial partners, focusing on real-world challenges that could be addressed using AI. The event served as a platform for sharing best practices, driving innovation, and developing AI-powered solutions applicable to various government functions. Participants utilized open-source tools and commercial products, shared insights, and contributed to the broader AI community. The hackathon emphasized GSA’s commitment to responsible AI development by prioritizing ethical AI, transparency, and risk mitigation. Outcomes from the event, including code, models, and lessons learned, were shared across agencies.

Harmonization of Artificial Intelligence Requirements

Explain any steps your agency has taken to document and share best practices regarding AI governance, innovation, or risk management. Identify how these resources are shared and maintained across the agency.

GSA has taken several steps to document and share best practices regarding AI governance, innovation, and risk management. The agency’s AI Governance Board plays a central role in this effort by establishing guidelines, reviewing AI use cases, and ensuring adherence to ethical and responsible AI principles. The CAIO, CIO, Administrator, and other leading GSA officials disseminate best practices in various forums, from internal town halls to speaking engagements with federal agencies and partners. The CAIO has documented these practices in internal guidance materials, reports, and policy documents, which are made accessible to employees through GSA’s intranet and collaboration platforms.

To foster continuous learning, GSA leads the AI CoP, which serves as a hub for sharing knowledge, best practices, and lessons learned across the agency. This community hosts workshops and training sessions and maintains a knowledge repository that includes AI governance, innovation, and risk management resources. These resources are regularly updated to reflect evolving standards and are shared across teams to ensure all employees have access to the latest best practices for responsible AI use.



Determining Which Artificial Intelligence Is Presumed to Be Safety-Impacting or Rights-Impacting

Explain the process by which your agency determines which AI use cases are rights-impacting or safety-impacting. In particular, describe how your agency is reviewing or planning to review each current and planned use of AI to assess whether it matches the definition of safety-impacting AI or rights-impacting AI, as defined in Section 6 of M-24-10. Identify whether your agency has created additional criteria for when an AI use is safety-impacting or rights-impacting and describe such supplementary criteria.

GSA has adopted the definitions and standards for rights-impacting and safety-impacting AI set out by M-24-10. GSA determines which AI use cases are rights-impacting or safety-impacting through a structured review process outlined in the AI governance framework (2185.1A CIO). The AI Safety Team reviews each AI use case to assess potential impacts on safety or individual rights, as defined in Section 6 of M-24-10, and then provides an adjudication recommendation to the AI Governance Board for final review and sign-off. The review evaluates whether an AI application involves decision-making affecting public health, safety, privacy, or civil liberties, including an analysis of the data being used, the intended outcomes, and any potential for bias or harm. GSA has not yet developed additional criteria for safety-impacting or rights-impacting AI beyond those specified in M-24-10 but plans to reassess regularly as AI use and capabilities evolve.

If your agency has developed its own distinct criteria to guide a decision to waive one or more of the minimum risk management practices for a particular use case, describe the criteria.

GSA has not developed distinct criteria to guide decisions to waive minimum risk management practices.

Describe your agency’s process for issuing, denying, revoking, tracking, and certifying waivers for one or more of the minimum risk management practices.

For the issuance of waivers, the CAIO, in coordination with other relevant officials, may waive one or more of the required minimum practices for rights-impacting or safety-impacting use cases involving a specific AI application or component, following a written determination. The waiver process involves submitting a formal request to the AI Governance Board, which includes a risk assessment and review of the specific system and context. The waiver will be considered for approval should the assessment find that meeting the requirement would increase overall risks to safety or rights or impose an unacceptable barrier to critical agency operations. If the board and CAIO approve, waivers will be reported to OMB within 30 days of issuance, tracked, and routinely reviewed to ensure compliance with agency standards. GSA maintains a certification process to confirm that AI systems granted waivers continue to meet risk management requirements, and the CAIO may revoke waivers if any noncompliance or unforeseen risks emerge. GSA had not issued any waivers as of this compliance plan’s publication.
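
The 30-day OMB reporting window and the tracking and revocation duties lend themselves to straightforward record-keeping. The sketch below is a hypothetical illustration of how a waiver record and its reporting deadline might be tracked; the class, field names, and example values are assumptions, not GSA’s actual tracking system:

```python
from __future__ import annotations

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class WaiverRecord:
    """Hypothetical tracking record for a minimum-practice waiver."""
    use_case: str
    practices_waived: list[str]
    issued_on: date
    revoked_on: date | None = None  # set by the CAIO if noncompliance emerges

    @property
    def omb_report_due(self) -> date:
        # Approved waivers must be reported to OMB within 30 days of issuance.
        return self.issued_on + timedelta(days=30)

    @property
    def active(self) -> bool:
        return self.revoked_on is None

# Illustrative only: GSA had issued no waivers as of this plan's publication.
example = WaiverRecord(
    use_case="Hypothetical rights-impacting use case",
    practices_waived=["pre-deployment AI impact assessment"],
    issued_on=date(2024, 9, 1),
)
assert example.omb_report_due == date(2024, 10, 1)
```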

Implementation of Risk Management Practices and Termination of Non-Compliant AI

Elaborate on the controls your agency has put in place to prevent non-compliant safety-impacting or rights-impacting AI from being deployed to the public. Describe your agency’s intended process to terminate, and effectuate the termination of, any non-compliant AI.

GSA has implemented several controls to prevent non-compliant safety-impacting or rights-impacting AI from being deployed to the public. Safeguards include the review of AI use cases by the AI Governance Board, confirming that all AI systems meet ethical, legal, and technical standards before deployment. The board evaluates risks related to public safety, privacy, civil liberties, and potential biases, with specific attention to AI systems that may have significant impacts on rights or safety.

To prevent non-compliant AI from being deployed, GSA has established continuous monitoring protocols that track AI system interactions at the network level. GSA is developing a strategy to increase its capacity to monitor AI system behaviors and performance. Automated alerts and reporting systems are in place to detect deviations from compliance standards, triggering an immediate review by relevant oversight bodies.
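
As a minimal sketch of what such an automated compliance check might look like, the snippet below flags metrics that drift outside approved thresholds; the metric names, threshold values, and alerting behavior are assumptions, not GSA’s actual tooling:

```python
# Hypothetical compliance monitor: flags AI system metrics that exceed
# approved thresholds and triggers an immediate review. All names and
# values are illustrative assumptions.
THRESHOLDS = {
    "error_rate": 0.05,       # maximum acceptable share of failed responses
    "pii_leakage_rate": 0.0,  # any detected PII leakage triggers review
}

def check_compliance(metrics: dict[str, float]) -> list[str]:
    """Return the names of metrics that exceed their approved thresholds."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0.0) > limit]

def monitor(system_name: str, metrics: dict[str, float]) -> None:
    violations = check_compliance(metrics)
    if violations:
        # A real deployment would notify the oversight bodies; this sketch
        # simply prints the alert.
        print(f"ALERT: {system_name} out of compliance on {violations}; "
              "initiating immediate review.")

monitor("example-chatbot", {"error_rate": 0.08, "pii_leakage_rate": 0.0})
```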

If an AI system is found to be non-compliant after deployment, GSA has developed a defined process for termination. The AI Governance Board, in collaboration with the OCIO, will issue a termination order. The termination process involves revoking system access, ceasing operations, and ensuring that data processed by the AI system is secured, maintained, or destroyed as required. An incident response team coordinates the shutdown and conducts a post-termination review to assess the impact and identify corrective actions. Additionally, communications will be made regarding use-case terminations, the reasons for the action, and any steps being taken to prevent future issues.

Minimum Risk Management Practices

Identify how your agency plans to document and validate implementation of the minimum risk management practices. In addition, discuss how your agency assigns responsibility for the implementation and oversight of these requirements.

GSA policy mandates that AI use case owners document and validate the implementation of the minimum risk management practices defined in M-24-10 through a comprehensive framework managed by the AI Governance Board. This process includes detailed documentation at each stage of the AI lifecycle, from development and testing to deployment and monitoring. AI use case owners are required to maintain thorough records of risk assessments, compliance checklists, data usage audits, real-world testing, and ethical impact evaluations. These documents will be stored in a centralized repository that is accessible to the AI Safety Team and AI Governance Board.
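
A minimal sketch of how such lifecycle documentation might be validated before a use case advances follows; the stage names and artifact lists are hypothetical, not GSA’s actual checklist:

```python
# Hypothetical lifecycle checklist: the artifacts that must exist in the
# central repository before a use case advances past each stage. Names
# are illustrative assumptions.
REQUIRED_DOCS = {
    "development": {"risk_assessment", "data_usage_audit"},
    "testing": {"real_world_test_report", "compliance_checklist"},
    "deployment": {"ethical_impact_evaluation", "monitoring_plan"},
}

def missing_docs(stage: str, filed: set[str]) -> set[str]:
    """Return the required artifacts not yet filed for the given stage."""
    return REQUIRED_DOCS.get(stage, set()) - filed

print(missing_docs("testing", {"real_world_test_report"}))
# -> {'compliance_checklist'}
```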

GSA will conduct periodic audits and performance reviews of AI systems to validate that the minimum risk management practices are being followed. These audits assess compliance with federal guidelines, including data privacy, bias mitigation, and ethical considerations. Additionally, GSA employs automated monitoring tools to track ongoing compliance, and discrepancies trigger immediate reviews by oversight teams.

Responsibility for implementing and overseeing these risk management practices is clearly assigned to multiple levels within the organization. The AI Governance Board provides strategic oversight and policy guidance, and supports the AI Safety Team in ensuring all AI initiatives comply with legal and ethical standards. The OCIO is responsible for the technical implementation and monitoring of AI systems. Individual project managers and teams are tasked with day-to-day compliance, including adhering to risk management protocols, conducting regular risk assessments, and reporting any issues to the AI Governance Board.

