Standards, Frameworks, Guidelines

German Government Publishes a Practical AI-Security Guide

March 9, 2023 – The German Federal Office for Information Security (BSI) published its “Practical AI-Security Guide 2023”, which provides a practical overview of the security concerns related to Artificial Intelligence (AI) systems. It discusses various types of attacks on AI systems and suggests measures to counter them.

The introduction emphasizes the importance of AI security, noting that AI systems are increasingly becoming targets of attacks. It highlights that the document aims to provide practical, though not comprehensive, advice on securing AI systems.

The section on “General Measures for IT Security of AI-Systems” discusses the need for a comprehensive approach to AI security. It suggests that AI systems should be designed and operated securely, and that security measures should be integrated into the entire lifecycle of AI systems. It also recommends regular security assessments and updates.

The document then delves into specific types of attacks on AI systems. “Evasion Attacks” are described as attempts to trick AI systems into making incorrect predictions by feeding them deliberately manipulated inputs. The document suggests that these attacks can be mitigated by using robust models, adversarial training, and input validation.
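
To make the idea of adversarial training concrete, the sketch below shows a single training step that mixes clean and FGSM-perturbed inputs. It is an illustrative example, not code from the BSI guide; the toy model, the data, and the epsilon value are assumptions for demonstration only.

```python
# Minimal sketch of adversarial training with FGSM (Fast Gradient Sign Method).
# Model, data, and hyperparameters are illustrative assumptions, not from the BSI guide.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.1):
    """Craft an adversarial example by stepping along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
    """Train on a mix of clean and FGSM-perturbed inputs to improve robustness."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy classifier on random data, purely to make the sketch runnable.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    x = torch.rand(32, 1, 28, 28)          # batch of "images" in [0, 1]
    y = torch.randint(0, 10, (32,))        # random labels
    print(adversarial_training_step(model, optimizer, x, y))
```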

“Information Extraction Attacks” aim to extract sensitive information, such as training data or model internals, from AI systems. The document recommends reducing the scope of the model’s output values, sanitizing data, avoiding overfitting, and using differential privacy to counter these attacks.
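
The snippet below illustrates, in simplified form, two of these recommendations: returning only a top-1 label instead of a full probability vector, and releasing an aggregate statistic through the Laplace mechanism used in differential privacy. The function names and the noise scale are illustrative assumptions, not prescriptions from the guide.

```python
# Minimal sketch of two output-hardening ideas: reducing the scope of returned
# values (top label only, no confidence scores) and adding calibrated noise in
# the spirit of differential privacy. Names and parameters are assumptions.
import numpy as np

def restricted_prediction(probabilities: np.ndarray) -> int:
    """Return only the top-1 class index, hiding the confidence scores
    that information-extraction attacks typically rely on."""
    return int(np.argmax(probabilities))

def noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release an aggregate statistic with Laplace noise scaled to a
    sensitivity of 1, the textbook mechanism behind differential privacy."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

probs = np.array([0.05, 0.72, 0.23])
print(restricted_prediction(probs))   # -> 1, no probabilities leaked
print(noisy_count(128, epsilon=0.5))  # noisy version of a sensitive count
```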

“Poisoning and Backdoor Attacks” manipulate the training data of AI systems, for example to implant hidden triggers that later change the model’s behaviour. The document suggests that these attacks can be countered by using trusted sources, searching for triggers, retraining models with benign samples, network pruning, and autoencoder detection.
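
As a rough illustration of autoencoder detection, the sketch below flags training samples whose reconstruction error is unusually high, which is one way poisoned samples may stand out. The architecture, threshold rule, and data are assumptions for demonstration only; in practice the autoencoder would first be trained on data believed to be clean.

```python
# Minimal sketch of autoencoder-based screening of training data: samples whose
# reconstruction error is far above the norm are flagged as potentially poisoned.
# Architecture, threshold rule, and data are illustrative assumptions.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, dim=784, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def flag_suspicious(model, data, num_std=3.0):
    """Flag samples whose reconstruction error exceeds mean + num_std * std."""
    model.eval()
    with torch.no_grad():
        errors = ((model(data) - data) ** 2).mean(dim=1)
    threshold = errors.mean() + num_std * errors.std()
    return (errors > threshold).nonzero(as_tuple=True)[0]

if __name__ == "__main__":
    data = torch.rand(256, 784)   # stand-in for flattened training images
    ae = TinyAutoencoder()
    # The autoencoder would normally be trained on trusted data first;
    # this call shows only the detection step.
    print(flag_suspicious(ae, data))
```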

The document also discusses the limitations of the defence measures. It notes that while these defences can help counter attacks, they can also adversely affect other aspects of the model, such as performance and computational time. It advises balancing attack resilience and performance based on the expected risk of the overall AI system.

In conclusion, the document provides a practical guide to securing AI systems. It highlights the importance of a comprehensive approach to AI security and provides specific measures to counter various types of attacks. However, it also emphasizes the need for a balanced approach, taking into account the potential trade-offs between attack resilience and performance.

