U.S. federal agencies are beginning to harness the transformative potential of generative AI (GenAI) to improve their work. GenAI can help generate policy drafts, summarize vast data sets, and automate mundane administrative tasks. These tools can make work faster and easier. However, GenAI in government comes with significant challenges. Agencies face a complex landscape shaped by evolving regulations, strict governance requirements, and serious security concerns.
Security: Safeguarding sensitive data and national interests
When implementing new technology, the federal government must protect sensitive and mission-critical data. GenAI models, particularly those based on large language models (LLMs), are trained on and process vast datasets. When this data includes personally identifiable information (PII), national security information, or other sensitive federal datasets, the stakes for security are considerably higher.
To mitigate these risks, agencies must implement a “secure by design” approach. This means embedding strong cybersecurity principles throughout the AI lifecycle – from model training to deployment and usage. Key steps in this approach:
- Enforce strict data segregation between training and inference environments.
- Implement endpoint protection and monitoring to prevent data leakage or unauthorized access.
- Design audit trails and logging to ensure traceability of AI interactions and decisions.
- Harness access controls to limit the use of GenAI tools based on user roles and data classification.
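The last two controls above – audit trails and role-based access tied to data classification – can be combined at the application layer. The following is a minimal sketch only; the role names, classification levels, and clearance mappings are hypothetical stand-ins for policies that would actually come from agency governance and data-handling guidance.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical data-classification levels, ordered lowest to highest sensitivity.
CLASSIFICATION_LEVELS = ["public", "internal", "pii", "national_security"]

# Hypothetical mapping of user roles to the most sensitive classification
# each role may submit to a GenAI tool.
ROLE_CLEARANCE = {
    "analyst": "internal",
    "privacy_officer": "pii",
    "security_officer": "national_security",
}

@dataclass
class AuditRecord:
    """One traceable entry per GenAI access decision."""
    timestamp: str
    user: str
    role: str
    classification: str
    allowed: bool

audit_log: list[AuditRecord] = []

def authorize_genai_request(user: str, role: str, classification: str) -> bool:
    """Allow a GenAI request only if the user's role clears the data's
    classification, and log every decision (allow or deny) for auditability."""
    clearance = ROLE_CLEARANCE.get(role, "public")
    allowed = (CLASSIFICATION_LEVELS.index(classification)
               <= CLASSIFICATION_LEVELS.index(clearance))
    audit_log.append(AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        user=user, role=role, classification=classification, allowed=allowed,
    ))
    return allowed
```

The key design point is that the deny path is logged just as thoroughly as the allow path, so reviewers can reconstruct who attempted to send which class of data to a GenAI tool.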
The Federal Risk and Authorization Management Program (FedRAMP) is central to certifying cloud providers and AI solutions that host federal data. Only FedRAMP-authorized systems should be considered for GenAI applications, as authorization ensures vendors meet stringent federal security requirements.
Governance: Establishing trust and accountability
Generative AI’s potential to produce new content – text, images, code – poses unique governance challenges. In the federal context, the concern isn’t just about performance; it’s about trust, accountability, and transparency.
Effective GenAI governance frameworks must address three important points:
- Model explainability: Federal agencies must be able to explain how and why an AI system made a specific recommendation or produced a given output. “Black-box” AI solutions are ill-suited for environments where transparency is a legal and ethical obligation.
- Human-in-the-loop (HITL) oversight: Decisions influenced or generated by GenAI, such as policy memos or grant recommendations, should be reviewed and approved by qualified federal personnel before being acted upon.
- Bias and fairness mitigation: Agencies must proactively identify and correct for systemic biases in training data that could skew GenAI outputs in discriminatory or inequitable ways.
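The human-in-the-loop requirement above can be enforced in software rather than left to convention. The sketch below is a hypothetical illustration of the idea, assuming a simple draft/approve/release workflow; it is not any agency's actual system.

```python
from enum import Enum

class Status(Enum):
    DRAFT = "draft"          # generated by the model, not yet releasable
    APPROVED = "approved"    # reviewed and signed off by federal personnel
    REJECTED = "rejected"

class GenAIDraft:
    """Hypothetical wrapper that blocks release of AI-generated content
    (e.g., a policy memo) until a named human reviewer approves it."""

    def __init__(self, content: str):
        self.content = content
        self.status = Status.DRAFT
        self.reviewer: str | None = None

    def review(self, reviewer: str, approve: bool) -> None:
        # Record who made the call, supporting accountability either way.
        self.reviewer = reviewer
        self.status = Status.APPROVED if approve else Status.REJECTED

    def release(self) -> str:
        if self.status is not Status.APPROVED:
            raise PermissionError("Human approval is required before release.")
        return self.content
```

Because `release()` fails by default, the safe path (no action taken) is also the default path, and every released item carries the name of the person who approved it.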
Additionally, agency-level AI governance boards or steering committees can help ensure cross-departmental alignment and standardized evaluation criteria for AI deployments.
Regulations: Navigating a shifting compliance landscape
The U.S. regulatory framework for AI, while still evolving, is rapidly becoming more defined. Key documents such as Executive Order 13960 on Promoting the Use of Trustworthy AI in the Federal Government, and the Office of Management and Budget’s (OMB) AI Guidance, outline broad expectations for federal AI use – emphasizing transparency, accountability, and risk management.
More recently, the Biden Administration’s Executive Order on Safe, Secure, and Trustworthy AI (October 2023) has introduced heightened scrutiny for GenAI, particularly around safety evaluations, watermarking, and managing dual-use concerns (e.g., using GenAI for cybersecurity or offensive applications). Agencies must now:
- Conduct AI impact assessments before deploying high-risk models.
- Ensure compliance with civil rights laws, especially when AI interacts with the public.
- Coordinate with the National Institute of Standards and Technology (NIST) for risk frameworks and technical benchmarks.
These regulatory shifts highlight the need for continuous compliance monitoring and agile policy adaptation as new rules are enacted.
Getting started with GenAI in federal agencies
Implementing generative AI in federal agencies holds significant promise, but it must be done carefully and with a deep respect for the unique demands of the public sector. By integrating robust security measures, establishing clear governance protocols, and staying aligned with a rapidly evolving regulatory environment, federal agencies can responsibly harness GenAI’s power while upholding public trust and institutional integrity.
If you’re looking to discuss AI policies and governance standards to keep your AI solutions secure, Arctic IT Government Solutions can help. Connect with us today to get the conversation started.
By Steve Schmitz, President and General Manager of Arctic IT Government Solutions