
An action plan to keep organizations safe with artificial intelligence

Government directives aren't enough to ensure security with AI

BrandPost By Paul Desmond
Aug 25, 2025 | 4 mins

While the U.S. AI Action Plan and EU AI Act lay out artificial intelligence requirements and advice, organizations would do well to stay ahead of the curve. Here's how.


The White House recently published an AI Action Plan full of recommended policy actions aimed at making effective use of artificial intelligence in industry and government. It comes on the heels of the EU AI Act, a comprehensive legal framework that regulates various facets of AI within the European Union, with enforcement of most provisions slated to begin in August 2026.

From a security perspective, organizations of all stripes would do well to remember that security has always involved a shared responsibility model. Cloud providers make that abundantly clear in their agreements, but it applies across the board, to on-premises systems as well. Each vendor and end-user organization must take responsibility for some aspects of security. It's no different with AI.

So, while security and IT professionals can and should pay attention to government laws and directives, they should also be aware that whatever any government produces will, by its nature, lag the reality on the ground, often by years. Such directives are based on yesterday’s threats and technology.

It’s difficult to think of a technology that has evolved faster than AI is moving right now. New developments arise seemingly by the day, and with them, new security threats. Following are some words of advice to help you keep up.

Pay attention, question everything, put up guardrails

First, pay attention – to emerging laws like the EU AI Act and whatever may come from the U.S. AI Action Plan, but also to your users and AI technology itself. Dig deep into how your employees are actually employing AI, the challenges they’re having, and the opportunities it offers. Consider what dangers it may present if things go awry or that bad actors, whether internal or external, may try to inject. Keep up to speed with how AI is evolving. Yes, that may be a full-time job in itself, but if you stay tuned in and connected, you can pick up on the big developments.

Next, question everything. Insist on explainability with all AI applications. Only by understanding how AI works can you begin to ensure that you can root out bias, privacy violations, and other misuses of data. You also need to ensure your AI is resistant to attacks, including data poisoning, by insisting on quality data standards, and protect against unacceptable risk, such as by insisting on human judgment when warranted.

You’ll also need guardrails around your AI applications, especially as agentic AI begins to take hold. If AI systems are going to be trusted to make decisions on their own, you must treat them like any other user, subject to appropriate access controls. In short, zero trust applies to AI applications just as it does to other users.
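As a concrete illustration of the zero-trust idea above, the sketch below treats an AI agent like any other user: denied by default, granted only explicit, least-privilege permissions, with every decision audited. This is a minimal hypothetical example (the `AgentIdentity` class, action names, and resource names are all invented for illustration), not a reference to any particular product or framework.

```python
# Minimal sketch: zero-trust access control for an AI agent,
# treating the agent as just another user identity.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """An AI agent registered like any other principal (hypothetical)."""
    name: str
    # Explicit allow-list of (action, resource) pairs: least privilege.
    allowed_actions: set = field(default_factory=set)

def authorize(agent: AgentIdentity, action: str, resource: str, audit_log: list) -> bool:
    """Deny by default; permit only explicitly granted (action, resource) pairs.
    Every decision is recorded so humans can review agent behavior."""
    permitted = (action, resource) in agent.allowed_actions
    audit_log.append((agent.name, action, resource, "allow" if permitted else "deny"))
    return permitted

audit_log = []
agent = AgentIdentity("report-bot", allowed_actions={("read", "sales-db")})

print(authorize(agent, "read", "sales-db", audit_log))   # explicitly granted
print(authorize(agent, "write", "sales-db", audit_log))  # denied by default
```

The point of the guardrail is the default-deny posture: an agentic system gets no standing access, and anything it is allowed to do is an auditable, deliberate grant.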

Collaborate to keep up

If all this sounds like a lot of work, know you're not alone. Collaborate with your peers. Join industry user groups to stay informed and learn best practices. Collaborate, too, with industry groups like the InfraGard National Members Alliance (INMA), the private sector component of the FBI's InfraGard program. INMA is focused on educational programs, training events, and information-sharing initiatives.

While there’s no question AI presents numerous security challenges, it’s not like we haven’t seen this before. Many will recall the angst over the EU General Data Protection Regulation and concern over how difficult it would be to comply with. GDPR did force change, but organizations weathered the storm and now we’ve seen U.S. states adopt many of the same tenets. Expect the same with AI, but don’t wait for government to force your hand.

Get more of the latest thinking about the biggest IT topics of the day at the Palo Alto Networks Perspectives page.