When the Cybersecurity Guardian Uploads State Secrets to OpenAI: The CISA ChatGPT Incident
The reality is this: the acting director of the United States' Cybersecurity and Infrastructure Security Agency uploaded sensitive government documents to ChatGPT's public platform last summer.
Let that settle for a moment.
The person responsible for defending America's federal networks and critical infrastructure from sophisticated nation-state adversaries fed "For Official Use Only" contracting documents into a public AI platform whose inputs are retained by OpenAI and which serves a reported 800 million users worldwide.
Multiple automated security alerts were triggered. Senior Department of Homeland Security officials launched an internal investigation. And yet, somehow, Dr. Madhu Gottumukkala remained in his position as CISA's acting director.
From my years in US government intelligence work, I can tell you this isn't simply embarrassing. This is a systems failure that reveals fundamental problems with how we approach privileged access, AI governance, and the dangerous assumption that senior officials somehow understand operational security better than the controls designed to protect them.
The Timeline of a Predictable Failure
Let me walk you through what happened, because the pattern here is instructive.
May 2025: Gottumukkala arrives at CISA as acting director after serving as South Dakota's Chief Information Officer under then-Governor Kristi Noem. At this time, ChatGPT is blocked for most Department of Homeland Security employees due to data retention concerns.
Shortly After Arrival: Gottumukkala requests special permission from CISA's Office of the Chief Information Officer to use ChatGPT. The request is granted with what DHS described as "controls in place."
Mid-July to Early August 2025: Gottumukkala uploads at least four documents marked "For Official Use Only" to ChatGPT's public platform. These documents contained government contracting information not intended for public release.
August 2025: CISA's cybersecurity monitoring systems detect the uploads. Multiple automated alerts are triggered in the first week of August alone. These alerts are specifically designed to prevent the theft or inadvertent disclosure of government files.
Post-Detection: Senior DHS officials, including then-acting General Counsel Joseph Mazzara and Chief Information Officer Antoine McCord, launch an internal assessment. Gottumukkala meets with CISA leadership to review the uploaded materials. The results of this investigation have not been made public.
January 2026: Politico breaks the story. CISA spokesperson Marci McCarthy confirms Gottumukkala had authorization to use ChatGPT but describes the usage as "short-term and limited."
The entire sequence is a masterclass in how privileged access exceptions become security vulnerabilities.
The "For Official Use Only" Distinction
Let's address the immediate defence: these documents were not classified.
Technically accurate. Practically irrelevant.
"For Official Use Only" is a controlled unclassified information designation. It's applied to sensitive information that, while not meeting the threshold for classification, is still not intended for public disclosure. Government contracting documents fall into this category for specific reasons.
They contain information about procurement processes, vendor relationships, pricing structures, and operational details that could be exploited by adversaries. They reveal decision-making patterns, organizational priorities, and resource allocation strategies.
In the hands of a sophisticated nation-state intelligence service, this type of information becomes puzzle pieces in a larger intelligence picture. China's Ministry of State Security didn't build its comprehensive database of US government operations through individual spectacular breaches. They built it through sustained collection of exactly this kind of "unclassified but sensitive" material over the years.
From an intelligence perspective, dismissing this as "not classified" fundamentally misunderstands how modern intelligence collection actually works.
The Architecture of Privilege Abuse
Here's what strikes me most about this incident: it follows a pattern I've seen repeatedly in government security failures.
Step One: Senior Official Identifies Tool Restriction. Gottumukkala arrives at CISA and discovers ChatGPT is blocked. Like many senior officials, he views this as an obstacle rather than a safeguard.
Step Two: Privilege Exception Granted. Rather than using approved alternatives like DHS's internal DHSChat, which is specifically configured to prevent data leakage, he requests and receives special access to the restricted tool.
Step Three: Exception Becomes Standard Practice. With formal authorization in hand, the tool transitions from "special case" to routine use. The psychological shift is critical. The exception becomes normalized.
Step Four: Security Controls Trigger. Automated monitoring systems detect the policy violation. Multiple alerts are generated. This is precisely what's supposed to happen.
Step Five: Post-Incident Rationalization. Official statements emphasize that permission was granted, controls were in place, and usage was limited. The narrative focuses on authorization rather than appropriateness.
One DHS official characterized it more bluntly to Politico: "He forced CISA's hand into making them give him ChatGPT, and then he abused it."
That assessment, while informal, captures the essential dynamic. This wasn't a technical failure. It was a failure of governance, judgment, and accountability.
What ChatGPT Does With Your Data
Let me explain exactly what happened to those documents once they entered ChatGPT's public platform, because this is where intelligence tradecraft meets commercial AI reality.
When you upload information to ChatGPT's public version, you are sharing that data with OpenAI. The company's terms of service are clear about this. The data may be used to improve the models, and it can be retained and processed on OpenAI's systems, outside your control.
Now, here's where it gets interesting from a counterintelligence perspective.
OpenAI operates globally. The company has users in virtually every country, including those with sophisticated intelligence services. When you query ChatGPT with specific prompts, the model draws on its training data to generate responses.
Could a sophisticated adversary craft prompts designed to extract information about US government contracting processes? Almost certainly. Would they be able to retrieve specific documents? Unlikely, given how large language models work. But could they identify patterns, terminology, and operational approaches that inform their broader intelligence collection efforts? Absolutely.
This is precisely why DHS developed DHSChat, its internal AI tool configured to keep all inputs within federal networks. The architecture isn't about limiting functionality. It's about ensuring that government data doesn't become training material for publicly accessible AI systems.
Gottumukkala had access to a secure alternative. He chose to use the public tool anyway.
The Polygraph Context Nobody Wants to Discuss
There's an additional layer to this story that makes the incident even more troubling.
In December 2025, Politico reported that Gottumukkala failed a counterintelligence polygraph examination in late July 2025. According to that report, he had pushed for career CISA staff to undergo these examinations, then failed his own test. Six career staffers were subsequently placed on leave, with the polygraph described as "unsanctioned."
Gottumukkala disputed this characterization in testimony to lawmakers.
The polygraph incident occurred around the same time as the ChatGPT uploads. Both incidents involve questions about judgment, operational security practices, and adherence to established protocols.
From an intelligence community perspective, this combination is significant. Counterintelligence polygraphs are designed to identify potential security risks. Uploading sensitive government documents to public AI platforms demonstrates exactly the kind of judgment lapse these examinations are meant to detect.
I'm not suggesting any connection between these incidents. But I am noting that they paint a concerning picture of decision-making at CISA's leadership level during a critical period.
The Agency Staffing Catastrophe
Here's the context that makes this incident particularly damaging: CISA is hemorrhaging institutional knowledge at an unprecedented rate.
When the Trump administration took office in January 2025, CISA had over 3,300 employees. By January 2026, that number had dropped to approximately 2,200. More than 1,000 cybersecurity professionals, with years of government service and specialized expertise, left the agency through buyouts, early retirements, and layoffs.
The agency faces proposed budget cuts of nearly 500 million dollars for fiscal year 2026.
Now consider the signal this ChatGPT incident sends to the remaining workforce.
Career staff who followed every security protocol, who declined to use restricted tools even when inconvenient, who maintained operational security discipline, watched their acting director receive special permission to use a blocked platform and then upload sensitive documents with minimal apparent consequences.
The message is clear: senior political appointees operate under different rules than career professionals.
From my experience in government service, I can tell you this is precisely how institutional security culture collapses. When leadership demonstrates that protocols are suggestions rather than requirements, when exceptions become routine, when violations are quietly handled rather than transparently addressed, the entire security framework becomes performative theatre.
The Irony of Defending Against What You Enable
Let me state this as plainly as possible: CISA's mission is to defend US federal networks and critical infrastructure against cyber threats from sophisticated adversaries, including Russia, China, Iran, and North Korea.
These adversaries invest heavily in collecting intelligence about US government operations. They study our procurement processes, our organizational structures, and our decision-making patterns. They look for vulnerabilities to exploit, both technical and human.
And CISA's acting director handed them a gift by uploading internal documents to a platform where that information potentially becomes accessible through an AI system with 800 million users worldwide.
The irony is almost too perfect.
CISA publishes guidance on AI security risks. The agency warns organizations about data leakage through large language models. They advise federal agencies and critical infrastructure operators to implement strict controls on AI tool usage.
And yet, when CISA's own leadership wanted access to a restricted AI platform, the agency granted an exception. When that exception was abused, the response was an internal review with undisclosed findings.
This is the cybersecurity equivalent of a fire marshal who smokes in the powder magazine and then acts surprised when someone notices.
What the UK Would Have Done Differently
Let me offer an outsider's perspective from my current position in London.
When I discuss this incident with my colleagues in the UK cybersecurity sector, the reaction is consistent: this wouldn't have happened under the National Cyber Security Centre's governance framework.
Not because British officials are inherently more security-conscious than Americans. Not because the NCSC has superior technology. But because the UK has implemented structural controls that make this type of incident significantly harder to execute.
First, privileged access exceptions in the UK government require multi-level approval with documented risk assessments. A senior official requesting special access to a restricted tool would need to demonstrate why approved alternatives are insufficient and accept formal accountability for the risk.
Second, the UK's approach to AI governance in government settings emphasizes technical controls over individual discretion. When the NCSC advises agencies to restrict AI tool usage, the restrictions are enforced at the network level, not left to individual compliance.
Third, and perhaps most importantly, the UK maintains a clear separation between political leadership and operational security implementation. Technical security decisions are made by career professionals who cannot be easily overridden by political appointees seeking expedient exceptions.
The United States could learn from this model. Technical security controls should not be subject to political convenience.
The Broader AI Governance Failure
This incident isn't isolated. It's symptomatic of how governments worldwide are struggling to implement AI governance frameworks that actually work in practice.
Organizations recognize that AI tools like ChatGPT present data leakage risks. They implement policies restricting usage. Then senior officials request exceptions because the restrictions are inconvenient. IT departments grant the exceptions because refusing a senior request is politically difficult. The exceptions multiply. The policies become meaningless.
I've seen this pattern in multiple organizations. The problem isn't technical. It's organizational psychology.
When you create security policies that are regularly bypassed for senior officials, you've implemented security theatre, not security. The policies exist to demonstrate compliance, not to enforce controls.
Real security requires three elements that most organizations struggle to implement:
Technical Controls: Enforcement at the infrastructure level, not dependent on individual compliance (a minimal sketch follows this list).
Clear Accountability: Senior officials held to the same standards as junior staff, with visible consequences for violations.
Alternative Capabilities: Approved tools that provide similar functionality without the security risks.
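To make the first of those elements concrete, here is a minimal sketch of infrastructure-level enforcement: an egress policy that decides whether an outbound AI request is allowed based on the destination, never on who is asking. The hostnames, the internal service name, and the deny-by-default rule for unknown AI endpoints are illustrative assumptions, not DHS's actual configuration.

```python
# Minimal sketch: deny-by-default egress policy for AI endpoints.
# Hostnames and the internal service name are hypothetical examples.

APPROVED_AI_HOSTS = {
    "dhschat.internal.example.gov",   # hypothetical internal AI service
}

BLOCKED_AI_HOSTS = {
    "chatgpt.com",
    "chat.openai.com",
    "api.openai.com",
}

def egress_decision(destination_host: str, username: str) -> str:
    """Return 'allow' or 'block' for an outbound request.

    The username is accepted only so it can be logged for audit purposes;
    it never changes the outcome, so a director gets the same answer as
    an intern.
    """
    if destination_host in APPROVED_AI_HOSTS:
        return "allow"
    if destination_host in BLOCKED_AI_HOSTS:
        return "block"
    # Unrecognized OpenAI-adjacent hosts default to block pending review.
    return "block" if destination_host.endswith(".openai.com") else "allow"

if __name__ == "__main__":
    for host in ("chatgpt.com", "dhschat.internal.example.gov"):
        print(host, "->", egress_decision(host, username="any.user"))
```

The unused identity parameter is the whole point: when the control lives in the infrastructure, there is no request a senior official can make that quietly changes the answer.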
DHS had the technical controls. They detected the uploads immediately through automated monitoring. They had the alternative capability. DHSChat was available and configured for secure AI usage.
What they lacked was clear accountability. Gottumukkala received special access, violated the terms of that access, triggered multiple security alerts, and remains in his position months later with no public consequences.
This is not how you build a security culture that actually protects sensitive information.
The Samsung Precedent Everyone Ignored
Let me reference an incident that should have prevented this CISA failure.
In 2023, Samsung discovered that several engineers had uploaded proprietary source code and meeting notes to ChatGPT while seeking assistance with technical problems. Once uploaded, that information sat on OpenAI's servers, outside Samsung's control and potentially available as training material for a platform its competitors also use.
Samsung's response was immediate and comprehensive. They restricted ChatGPT access across the organization, implemented technical controls at the network level, and began developing internal AI tools with appropriate data protection.
This incident was widely reported in cybersecurity circles. Government agencies worldwide used it as a case study for AI risk management. CISA itself referenced similar incidents in security guidance for federal agencies.
And yet, when CISA's acting director requested ChatGPT access, the agency granted permission.
The Samsung incident proved that even technically sophisticated organizations with professional workforces will inadvertently leak sensitive information to AI platforms if access is permitted. The solution isn't trusting individuals to exercise appropriate judgment. It's implementing technical controls that prevent the behaviour.
CISA knew this. They had the Samsung case study. They had their own security guidance. They had technical alternatives in place.
They granted the exception anyway.
What This Reveals About Government Security Culture
From my perspective as someone who spent years inside US government intelligence operations, this incident reveals several troubling patterns about federal cybersecurity culture.
Pattern One: Privilege Trumps Protocol. Senior officials routinely receive exceptions to security policies that career staff must follow without exception. This creates a two-tier security culture where the people with the most access and the highest value as intelligence targets operate under the weakest controls.
Pattern Two: Convenience Over Security. When secure alternatives exist but require additional steps or provide slightly less functionality, decision-makers often choose convenience despite the risk. DHSChat was available. Gottumukkala used ChatGPT because it was easier.
Pattern Three: Hidden Accountability. Security violations by senior officials are handled through internal reviews with undisclosed findings. Career staff violations result in formal disciplinary action. This double standard undermines the entire security framework.
Pattern Four: Detection Without Consequence. Multiple automated alerts were triggered. The monitoring systems worked perfectly. But detection without meaningful consequences doesn't prevent future violations; it just documents them.
Pattern Five: Institutional Expertise Dismissed. CISA employs thousands of cybersecurity professionals with deep expertise in operational security. Their acting director apparently believed his judgment was superior to the controls they implemented.
These patterns don't create resilient security cultures. They create organizations where security policies are performative compliance exercises rather than operational controls.
The Real Risk: Normalization of Deviation
Here's what concerns me most about this incident, from both my intelligence background and my current work in the private sector.
The danger isn't the specific documents Gottumukkala uploaded. Four contracting files, while sensitive, don't represent catastrophic intelligence loss.
The danger is the normalization of security policy violations at senior levels.
When leadership demonstrates that policies can be circumvented through privilege, when exceptions become routine rather than extraordinary, when violations are quietly managed rather than transparently addressed, you've created an organizational culture where security is negotiable.
In intelligence operations, we call this "security culture decay." It's insidious because it doesn't manifest as a single catastrophic breach. It manifests as dozens of small compromises that accumulate until the entire security framework becomes meaningless.
Consider how this incident likely unfolded within CISA:
Career staff implement ChatGPT restrictions based on data leakage risks. Acting director requests an exception. IT security professionals advise against granting an exception. Senior leadership overrules security advice. An exception is granted. Career staff watch as their security recommendations are ignored. Morale suffers. Institutional security culture weakens.
Now multiply this pattern across dozens of security decisions over months. The cumulative effect is an organization where security policies are viewed as obstacles to circumvent rather than controls to respect.
That's the real damage from this incident. Not the documents uploaded, but the message sent about security culture at America's civilian cybersecurity agency.
What Should Have Happened
Let me describe how this situation should have been handled, based on both my government experience and current private sector work.
Step One: Need Assessment. When Gottumukkala identified ChatGPT as a useful tool, the appropriate response was a formal risk assessment. What specific functionality does ChatGPT provide that approved alternatives lack? What is the business justification for accepting the additional risk?
Step Two: Alternative Evaluation. DHS had DHSChat available. Before granting an exception to use public ChatGPT, CISA should have conducted a formal evaluation of whether DHSChat could meet the identified needs with appropriate modifications.
Step Three: Risk-Based Decision. If DHSChat was genuinely insufficient, the decision to grant ChatGPT access should have included documented risk acceptance by someone with appropriate authority, clear usage restrictions, and defined monitoring requirements.
Step Four: Technical Controls. Any ChatGPT access should have been implemented with technical controls preventing the upload of FOUO material. Content scanning, data loss prevention tools, and automated blocking of sensitive information are standard capabilities (a minimal sketch of such a check follows this list of steps).
Step Five: Training and Oversight. Before receiving access, the user should have completed specific training on the limitations and appropriate usage of the platform, with regular compliance reviews.
Step Six: Violation Response. When the security alerts were triggered, the response should have been immediate access suspension, comprehensive investigation, and formal determination of whether the violation warranted disciplinary action or administrative leave pending review.
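To illustrate Step Four, here is a minimal sketch of the kind of pre-upload content check a data loss prevention layer can apply. The marking list and the single regular expression are illustrative assumptions; commercial DLP tools are far more sophisticated, and this is not DHS's actual rule set.

```python
import re

# Minimal sketch: refuse uploads that carry controlled-information markings.
# The marking list is illustrative, not an official or complete set.
CONTROL_MARKINGS = re.compile(
    r"\b(FOR OFFICIAL USE ONLY|FOUO|CONTROLLED UNCLASSIFIED INFORMATION|CUI)\b",
    re.IGNORECASE,
)

def upload_permitted(document_text: str) -> bool:
    """Return False if the document carries a controlled marking."""
    return CONTROL_MARKINGS.search(document_text) is None

sample = "Attachment A - FOR OFFICIAL USE ONLY - vendor pricing schedule"
if not upload_permitted(sample):
    print("Upload blocked: controlled marking detected; alert raised for review.")
```

Assuming the markings appear in the document text, a check like this stops the upload before it happens rather than documenting it afterwards, which is the difference between the alerts CISA received and the outcome it needed.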
None of these steps is exotic or unusual. This is basic security governance for privileged access management.
CISA didn't follow these steps because privileged access for senior officials operates under different rules than access for career staff.
That's the core problem.
The Congressional Response That Should Happen
Representative Bennie Thompson, Ranking Member of the House Homeland Security Committee, issued a statement calling Gottumukkala "at best, in over his head, if not unfit to lead."
That's political theatre. Here's what Congress should actually do:
First: Mandate a comprehensive audit of privileged access exceptions across DHS, particularly for senior political appointees. Document every case where security policy exceptions were granted, the justification provided, and the monitoring implemented.
Second: Require DHS to implement technical controls that enforce AI usage restrictions at the network level, preventing policy violations through individual access requests.
Third: Establish a clear accountability framework where security policy violations by senior officials receive the same disciplinary response as violations by career staff.
Fourth: Direct GAO to conduct an independent assessment of CISA's security culture, particularly examining how recent workforce reductions and leadership changes have affected operational security practices.
Fifth: Require quarterly reporting to Congress on security incidents involving senior officials, with a specific focus on privileged access violations and the organizational response.
These measures would address the systemic problems revealed by this incident rather than focusing solely on individual accountability for Gottumukkala.
Because here's the reality: replacing one political appointee while leaving the underlying governance framework unchanged will simply result in the same problems manifesting under different leadership.
How UK SMBs Should Respond
Let me bring this back to practical implications for small and medium businesses, particularly those in the UK market.
This incident demonstrates that even sophisticated government agencies with extensive resources and professional security staff struggle with AI governance. What does this mean for your business?
First: Don't Assume Expertise Equals Security. The assumption that technical expertise or a senior position translates to appropriate security judgment is demonstrably false. CISA's acting director had government IT leadership experience. He still uploaded sensitive documents to a public AI platform.
Implement technical controls, not trust-based policies.
Second: Your Approved Tools List Matters. If your business uses AI tools, you need a formally approved list with clear usage restrictions. "Don't upload sensitive information" is not a policy. It's a hope.
Define what constitutes sensitive information for your business. Implement data loss prevention tools that enforce restrictions. Monitor usage through automated systems.
Third: Senior Access Needs Stricter Controls. Your senior leadership team has access to your most sensitive information. They're also the most likely to request exceptions to security policies. This creates maximum risk.
Privileged access for senior officials should include enhanced monitoring, not reduced oversight.
Fourth: Detection Requires Response. CISA detected the violation immediately through automated alerts. The problem wasn't detection; it was response.
If your monitoring systems detect policy violations, your incident response plan must include clear steps regardless of who triggered the alert. No exceptions based on seniority (a minimal sketch of this kind of uniform handling follows the fifth point below).
Fifth: Alternative Capabilities Prevent Circumvention. DHS had DHSChat available as a secure AI alternative. Organizations that restrict tools without providing functional alternatives create pressure for exceptions.
When you restrict a capability, provide an approved alternative that meets the business need securely.
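Pulling the second and fourth of those points together, here is a minimal sketch of an alert handler whose response depends only on what happened, never on who did it. The tool names, roles, and action strings are illustrative assumptions for a small business, not a specific product's API.

```python
from dataclasses import dataclass

APPROVED_AI_TOOLS = {"internal-assistant"}   # hypothetical approved tool

@dataclass
class UsageEvent:
    user: str
    role: str                      # "staff", "manager", "director", ...
    tool: str
    contained_sensitive_data: bool

def respond(event: UsageEvent) -> list[str]:
    """Return the response actions for a detected AI-usage event.

    The user's role is kept for the audit record, but it is never read
    when deciding the actions, so seniority cannot shrink the response.
    """
    actions = []
    if event.tool not in APPROVED_AI_TOOLS:
        actions += ["suspend access to the tool", "open an incident ticket"]
    if event.contained_sensitive_data:
        actions.append("escalate to the data-protection lead")
    return actions

# The director and the intern get exactly the same treatment.
print(respond(UsageEvent("j.smith", "director", "public-chatbot", True)))
print(respond(UsageEvent("a.jones", "intern", "public-chatbot", True)))
```

The design choice mirrors the CISA failure in reverse: the alerts can fire all they like, but only a response path that ignores seniority turns detection into protection.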
The Competitive Advantage Opportunity
Here's the business case I would make to UK SMBs watching this incident unfold:
Your larger customers and partners are increasingly concerned about supply chain security. They're implementing vendor risk management programs. They're conducting security assessments. They're requiring documentation of security controls.
When they assess your AI governance practices, you want to demonstrate that you've learned from incidents like the CISA ChatGPT failure.
Competitive Differentiator One: Documented AI usage policy with a clear approved tools list, usage restrictions, and technical controls.
Competitive Differentiator Two: Evidence of consistent policy enforcement regardless of user seniority, demonstrating a mature security culture.
Competitive Differentiator Three: Automated monitoring of AI tool usage with documented incident response procedures.
Competitive Differentiator Four: Regular security training for all staff, including senior leadership, on AI data leakage risks.
Competitive Differentiator Five: A formal risk assessment process for any requests to use AI tools outside the approved list.
These aren't expensive enterprise solutions. They're governance frameworks that demonstrate security maturity. When your competitors are reactive, you can be proactive.
Your customers want partners who understand that AI security isn't just about blocking ChatGPT. It's about implementing governance frameworks that actually work when tested by human nature.
How to Sell This to Your Board
If you're trying to convince your board to invest in AI governance controls, here's the argument I would make:
"The acting director of America's cybersecurity agency just uploaded sensitive government documents to ChatGPT despite having access to secure alternatives. Multiple security alerts were triggered. A Department of Homeland Security investigation was launched. And the incident is now being reported internationally as an example of AI governance failure.
Our customers are reading these same reports. They're wondering whether their vendors have appropriate AI controls in place. They're updating their vendor risk assessment questionnaires to include questions about AI usage policies.
We can either wait until a customer audit identifies gaps in our AI governance, or we can implement controls proactively and use them as competitive differentiators.
The investment required is minimal compared to the reputational damage of being the company that leaked customer data to ChatGPT because we didn't implement appropriate controls.
CISA had the budget for sophisticated security. They still failed because they lacked governance discipline. We can learn from their mistakes or we can repeat them. The choice is ours."
That's a business case focused on customer requirements, competitive positioning, and risk avoidance. It's not a technical security argument. It's a commercial argument that happens to require security implementation.
Boards respond to commercial arguments.
The Intelligence Community Perspective
Let me close with the perspective that my former colleagues in US intelligence are probably discussing privately.
When CISA's acting director uploaded those documents, he provided potential adversaries with several valuable intelligence products:
Product One: Tradecraft Observation. The incident revealed that CISA's senior leadership can obtain exceptions to security policies through organizational pressure, and that such exceptions are granted even when secure alternatives exist. This is useful information for targeting individuals.
Product Two: Process Intelligence. Government contracting documents contain information about procurement processes, vendor relationships, and organizational priorities. This helps adversaries understand how US government cybersecurity agencies make operational decisions.
Product Three: Technical Reconnaissance. The documents likely contained references to specific vendors, technologies, and implementation approaches. This provides insight into CISA's technical architecture and defensive capabilities.
Product Four: Cultural Assessment. The incident demonstrates that CISA's security culture tolerates senior-level policy violations with limited consequences. This is valuable intelligence about organizational discipline and internal controls.
Product Five: Timing Information. The incident occurred during a period of significant workforce reduction and organizational disruption at CISA. This creates additional targeting opportunities while the institutional security culture is weakened.
None of these individual products represents catastrophic intelligence loss. But in aggregate, they provide adversaries with useful context about CISA's security culture, organizational dynamics, and current vulnerabilities.
This is precisely why "For Official Use Only" designations exist. Individual documents may seem innocuous, but they contribute to larger intelligence pictures.
What Happens Next
The investigation findings remain undisclosed. Gottumukkala continues as CISA's acting director. The Trump administration's permanent nominee for CISA director, Sean Plankey, remains in confirmation limbo.
From a practical standpoint, this means America's civilian cybersecurity agency will likely continue under acting leadership with minimal public accountability for several more months at a minimum.
The workforce reductions continue. The budget cuts are proposed. The institutional expertise bleeds away.
And the lesson learned, at least from this incident, appears to be that senior officials can violate security policies without meaningful consequences as long as they obtained formal permission to use the tools they subsequently abused.
That's not a sustainable security culture.
The reality is that CISA needs permanent leadership with both the security expertise and the institutional respect to rebuild the agency's security culture. They need budget certainty to retain the remaining staff. They need clear accountability frameworks that apply equally to political appointees and career professionals.
Whether they'll get any of these things remains uncertain.
What is certain is that adversaries noticed this incident. They noted the vulnerabilities it revealed. They updated their targeting strategies accordingly.
That's how intelligence operations actually work.
Final Thoughts From an Outsider-Insider
Six years ago, I walked away from US government intelligence work. The decision was complex and personal, but part of it involved recognizing that government security culture has structural problems that individual effort cannot fix.
This CISA incident exemplifies those problems.
Technical controls implemented by professionals are undermined by political convenience. Security policies written with a sophisticated understanding of risk are bypassed through privilege exceptions. Monitoring systems that work perfectly are rendered meaningless by a lack of accountability.
From my current vantage point in London, working with UK businesses on cybersecurity challenges, I watch incidents like this with a mixture of professional concern and personal frustration.
The United States has extraordinary cybersecurity talent. CISA employs some of the most capable security professionals in the world. The technical expertise exists.
What's lacking is the political will to enforce security discipline at senior levels and the institutional framework to ensure that security policies apply regardless of hierarchy.
Until that changes, incidents like this will continue. The specific details will vary. The underlying pattern will remain constant.
Senior officials will request exceptions. Organizations will grant them. Violations will occur. Investigations will be conducted quietly. And the security culture will continue to decay.
That's the reality. It's not particularly dramatic. It's just consistently, predictably, systematically dysfunctional.
And American businesses watching this unfold should recognize that if the federal government's civilian cybersecurity agency can't maintain AI governance discipline, their own organizations likely face similar challenges.
The question is whether they'll learn from CISA's mistakes or repeat them.
From where I'm sitting, across the Atlantic, working with businesses that are genuinely trying to implement security controls that actually work, I hope they choose the former.
But experience suggests many will choose the latter.
Because security discipline is hard. Privilege exceptions are easy.
And human nature doesn't change just because the stakes are high.
| Source | Article |
|---|---|
| Politico | Trump's acting cyber chief uploads sensitive files to public ChatGPT |
| CSO Online | CISA chief uploaded sensitive government files to public ChatGPT |
| TechRepublic | Trump's Acting Cyber Chief Allegedly Leaked Data to ChatGPT |
| IBTimes UK | DHS Probes CISA Head After Sensitive Files Uploaded to ChatGPT |
| TRT World | Trump's acting cyber chief uploads sensitive files to public ChatGPT |
| The National | Cyber hygiene: Did Trump's cyber director compromise US security by using ChatGPT? |
| House Homeland Security Committee | Ranking Member Thompson Statement on CISA Acting Director Uploading Sensitive FOUO Documents into ChatGPT |
| Daily Caller | US Cyber Defense Agency Head Posted Sensitive Information Online |
| Politico (December 2025) | CISA acting director failed counterintelligence polygraph |
About Corrine Jefferson
Corrine Jefferson is a Senior Security Consultant at a multinational technology firm based in London. She previously worked in cybersecurity for the US government, where she specialized in nation-state threat analysis and cyber operations. She brings an intelligence community perspective to private sector security challenges, with particular focus on threat intelligence and risk assessment. She is an occasional strategic guest expert on The Small Business Cyber Security Guy podcast.