CISA investigates internal ChatGPT data exposure involving sensitive DHS documents

The US Cybersecurity and Infrastructure Security Agency (CISA) has launched an internal review after its acting director uploaded sensitive contracting documents to a public version of ChatGPT, according to officials familiar with the matter. The incident, which occurred last summer, triggered multiple automated security alerts within the Department of Homeland Security and has renewed scrutiny of the use of artificial intelligence tools inside federal agencies.

The documents uploaded were not classified, but officials said they were marked “for official use only,” a designation applied to materials considered sensitive and not intended for public disclosure. Such files often include operational or procurement details that require additional safeguards when handled within government systems. Automated cybersecurity sensors at CISA reportedly flagged the activity in early August, with several alerts generated during the first week of the month.

Following the alerts, DHS initiated an internal assessment to determine whether the uploads posed any risk to government operations or security. Officials said the review was aimed at understanding how the data was handled, whether existing safeguards were followed, and whether any corrective measures were necessary. The department has not yet publicly disclosed the conclusions of that review.

CISA officials confirmed that Acting Director Dr. Madhu Gottumukkala had received a temporary authorization to use ChatGPT under specific conditions. According to the agency, he last accessed the tool in mid-July 2025 as part of a limited exception process overseen by the department’s technology leadership. Access to ChatGPT remains blocked by default across CISA systems, with exceptions granted only on a case-by-case basis and subject to internal controls.

A spokesperson for the agency said the authorized use was short-term and tightly scoped, adding that DHS continues to evaluate emerging technologies while balancing innovation with security requirements. The department has emphasized that artificial intelligence tools are being explored to support modernization efforts, but only within clearly defined guardrails designed to protect sensitive government information.

The incident has also raised broader concerns about how data shared with public AI platforms may be stored or reused. Information entered into publicly available AI systems may be retained or used to train future models, underscoring the importance of strict internal guidance for federal employees.

As federal agencies increasingly experiment with AI-driven tools, the situation highlights the challenges of integrating new technologies into highly regulated environments. DHS officials said the episode serves as a reminder of the need for consistent oversight, clear policies, and ongoing training to ensure that innovation does not come at the expense of data security.
