Marin Ivezic

Over the past two decades, I have closely observed advancements in AI and authored a book on the future with AI. This in-depth exposure has led me to recognize AI as a potentially unparalleled threat. Since the early 2000s, I have been deeply involved in AI security research. I recognized early the dangers posed by AI-enabled disinformation and have been discussing these risks with various governments since 2010. Among the many risks associated with AI, I am particularly concerned about its application in military contexts, which is why I started this blog. For more on my views on AI risks and the reasons behind Defence.AI, see Defence.AI – Securing the Future. More about me.

Luka Ivezic

Luka Ivezic is a guest contributor to Defence.AI. He is the Lead Cybersecurity Consultant for Europe at the Information Security Forum (ISF), a leading global, independent, not-for-profit organisation dedicated to cybersecurity and risk management. Before joining the ISF, Luka served as a cybersecurity consultant and manager at PwC and Deloitte. His journey in the field began as an independent researcher focused on the cyber and geopolitical implications of emerging technologies such as AI, IoT, and 5G. He co-authored with Marin the book “The Future of Leadership in the Age of AI”. Luka holds a Master’s degree from King’s College London’s Department of War Studies, where he specialized in the disinformation risks posed by AI.

    August 24, 2024

    Addressing the Full Stack of AI Concerns: Responsible AI, Trustworthy AI, Secure AI and Safe AI Explained

    As AI continues to evolve and integrate more deeply into societal frameworks, the strategies for its governance, alignment, and security must advance as well, ensuring that AI enhances human capabilities without undermining human values. This requires a vigilant, adaptive approach that is responsive to new challenges and opportunities, aiming for an AI future that is as secure as it is progressive.
    January 1, 2024

    Marin’s Statement on AI Risk

    The rapid development of AI brings both extraordinary potential and unprecedented risks. AI systems are increasingly demonstrating emergent behaviors and, in some cases, are even capable of self-improvement. This advancement, while remarkable, raises critical questions about our ability to fully control and understand these systems. In this article I present my own statement on AI risk, drawing inspiration from the Statement on AI Risk from the Center for AI Safety, which has been endorsed by leading AI scientists and other notable figures in the field. I then explain that statement and aim to dissect the reality of AI risks…

    Our AI Security Articles