
GAO: Fully Implementing Key Practices Could Help DHS Ensure Responsible Use for Cybersecurity

In the rapidly advancing landscape of artificial intelligence (AI), the responsible use of this technology is paramount, especially in critical sectors like cybersecurity. The Department of Homeland Security (DHS) has been at the forefront of incorporating AI into its cybersecurity efforts. However, a recent report by the Government Accountability Office (GAO) scrutinizes the accuracy of DHS’s AI inventory for cybersecurity and evaluates the adoption of key practices outlined in GAO’s AI Accountability Framework.

Executive Order No. 13960 requires federal agencies, including DHS, to maintain an inventory of AI use cases to enhance transparency and public understanding of AI applications. DHS has established such an inventory, publicly available on its website. However, GAO found discrepancies in the accuracy of the AI inventory, specifically in the realm of cybersecurity. Of the two AI cybersecurity use cases identified, officials revealed that one had been inaccurately characterized as AI. While DHS has a process to review use cases before they are included in the AI inventory, the agency does not currently verify whether uses are correctly classified as AI. GAO stresses the importance of expanding this process to ensure accurate reporting and foster transparency.

To assess DHS’s accountability in AI practices, GAO employed its AI Accountability Framework, which outlines key practices for the responsible use of AI. The evaluation focused on the Automated Personally Identifiable Information (PII) Detection cybersecurity use case. GAO identified 11 key practices from the Framework and assessed their implementation by DHS (see figure). The findings indicate that while DHS fully implemented four key practices, five others were implemented to varying degrees, and two were not implemented at all. Notably, practices related to documenting data sources and origins and assessing data reliability were omitted. GAO emphasizes that addressing these aspects is crucial for ensuring the accuracy and reliability of AI models.

Figure: Status of the Department of Homeland Security’s Implementation of Selected Key Practices to Manage and Oversee Artificial Intelligence for Cybersecurity (GAO Graphic)

The report underscores the significance of fully implementing the key practices outlined in the AI Accountability Framework. Doing so, GAO argues, will enable DHS to ensure accountable and responsible use of AI in its cybersecurity initiatives. As AI plays an increasingly integral role in national security and critical infrastructure, adhering to best practices and fostering transparency are essential for building public trust and mitigating the potential risks associated with the technology.

Executive Order No. 14110, issued in October 2023, reinforces the need for responsible AI use, recognizing its potential to address challenges while cautioning against irresponsible use that could pose risks to national security. GAO’s report aligns with these principles, urging DHS to strengthen its processes and practices for the effective and ethical deployment of AI in the realm of cybersecurity.

Read the full GAO report here.