    Agentic AI in Procurement, Part 4: Building Security Right into Your Automated Business Processes

    When I talk to customers about the next phases of AI, I usually hear some variations of the following questions: “What are you going to do with my data?” “Are you mixing my data with someone else’s data?” “How do we track output?” “How do we make sure customers are getting good answers?”

    As someone who has been doing cybersecurity for more years than I care to admit, I can say with confidence that these questions raise concerns customers have always had. A few years ago, the same questions were asked about Large Language Models (LLMs). And further back, they could have applied to Cloud Computing. The difference this time is the speed and scale of potential problems. Agentic AI, by definition, works quickly and improvisationally, which is wonderful if it is creating new efficiencies for your business and a disaster if it is making bad decisions due to a programming error or because it has been compromised by a bad actor.

    As the procurement industry moves closer to practical applications for Agentic AI, it must also address the security issues that come with them. A recent survey from Cyber Security Tribe found that 59% of CISOs considered their organization’s use of Agentic AI “a work in progress,” which is another way of saying “We know that this technology is a priority, but we haven’t decided what to do yet.”

    Since the announcement of JAI (pronounced “Jay”), our agentic offering, we have been working with companies to configure solutions that make sense for their business. This blog series is an extension of those conversations. With my colleague having covered the necessity of Agentic AI, what that necessity means across different industries, and how you can start preparing your data, I would like to offer a guide to building security into your autonomous efforts from the start, so you are not caught off-guard by a vulnerability or playing security catch-up right before launch.

    Key Considerations for Secure Agents in Procurement

    The security concerns about Agentic AI are real and cannot be overstated. Among the risks that organizations face is “memory poisoning,” where malicious data gets injected into an agent’s memory to manipulate its behavior or reveal sensitive information. Also concerning is the speed of such incursions. A compromised agent with broad system access can execute multi-stage attacks at machine speed, far faster than cybersecurity teams can respond. In extreme cases, poorly contained agents can create or leverage tools that resist standard security controls.

    The good news is that Agentic AI is a natural product fit for procurement. Training an agent to be a sourcing professional or a contract specialist depends on best practices and rules. Success requires clear, detailed instructions and ensuring that critical decisions are never made without human oversight.

    Consideration 1: Define Scope and Requirements

    As with deciding which business processes to automate (see part 3), you need to state what your Agentic AI can do (and what it cannot do). Is the agent for vendor evaluation, spend analysis, or contract management? Be specific. Map data flows to determine what sensitive information the agent will access (pricing, vendor data, financial information).
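One way to make that scope explicit is to encode it as data rather than leave it in a policy document. The sketch below is illustrative, not a real product schema; the task names, data classes, and agent name are assumptions for the example.

```python
from dataclasses import dataclass

# Hypothetical scope definition for a procurement agent.
# Task and data-class names are illustrative placeholders.
@dataclass(frozen=True)
class AgentScope:
    name: str
    allowed_tasks: frozenset   # what the agent may do
    allowed_data: frozenset    # data classes the agent may read
    denied_data: frozenset     # data classes it must never touch

    def may_perform(self, task: str) -> bool:
        return task in self.allowed_tasks

    def may_access(self, data_class: str) -> bool:
        return data_class in self.allowed_data and data_class not in self.denied_data

sourcing_agent = AgentScope(
    name="vendor-evaluation",
    allowed_tasks=frozenset({"score_vendor", "summarize_rfp_responses"}),
    allowed_data=frozenset({"vendor_profiles", "pricing"}),
    denied_data=frozenset({"employee_pii"}),
)
```

A declared scope like this gives you something concrete to review, audit, and enforce at runtime, rather than relying on the agent's prompt alone.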

    Consideration 2: Implement Strong Authentication and Access Controls

    As with any system handling sensitive materials, require multi-factor authentication for all users interacting with the AI agent. Role-based access is always a good idea, along with time-based and context-based restrictions. Also ensure there are audit trails for all agent actions and decisions, so that if a mistake occurs or you need to document a transaction, you always have hard evidence.
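The pairing of role checks with an audit trail can be sketched as follows. This is a minimal illustration with made-up roles and actions; in production, the log would live in tamper-evident, centralized storage, not an in-memory list.

```python
import time

# Illustrative role-to-permission mapping; roles and actions are assumptions.
ROLE_PERMISSIONS = {
    "analyst": {"view_spend_report"},
    "manager": {"view_spend_report", "approve_contract"},
}

audit_log = []  # placeholder for tamper-evident, centralized audit storage

def perform_agent_action(user: str, role: str, action: str) -> str:
    """Check role-based access and record the attempt either way."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({
        "ts": time.time(),
        "user": user,
        "role": role,
        "action": action,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"role '{role}' may not perform '{action}'")
    return f"{action} executed for {user}"
```

Note that denied attempts are logged as well as successful ones; the audit trail is only useful as evidence if it records everything the agent tried to do.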

    Consideration 3: Implement Data Masking for Sensitive Information

    Along with multi-factor authentication, you should create clear boundaries between the AI agent and critical data. Defaulting to less access and less privilege for your AI ensures minimum exposure. Tokenizing payment and vendor banking information is also recommended.
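A minimal tokenization sketch, assuming a simple in-memory vault for illustration: the agent reasons only over opaque tokens, while the token-to-value mapping lives in a separately secured store that only trusted services (for example, payment execution) can query.

```python
import secrets

class TokenVault:
    """Illustrative token vault; a real one would be a hardened, access-controlled service."""

    def __init__(self):
        self._vault = {}

    def tokenize(self, sensitive_value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = sensitive_value
        return token

    def detokenize(self, token: str) -> str:
        # In production, restricted to trusted payment services only.
        return self._vault[token]

vault = TokenVault()
iban_token = vault.tokenize("DE89370400440532013000")
# The agent sees and passes around iban_token; the raw account number never
# enters its context, memory, or logs.
```

This keeps vendor banking details out of the agent's reach even if its memory or outputs are compromised, which directly limits the blast radius of the memory-poisoning risk described above.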

    Consideration 4: Establish Guardrails to Ensure AI Model Security

    Setting spending limits is essential. You should also build explanation requirements for agent recommendations. Like any team member, Agentic AI must be able to justify certain actions, especially in terms of transaction thresholds and complex decisions.
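Both rules can be enforced in one small gate. The threshold below is an assumption for the example; the point is that a recommendation with no justification is rejected outright, and anything over the limit is escalated rather than auto-approved.

```python
# Illustrative spending threshold; set this per your procurement policy.
SPEND_LIMIT = 10_000

def evaluate_recommendation(amount: float, justification: str) -> str:
    """Guardrail: require a justification, and escalate large transactions."""
    if not justification.strip():
        raise ValueError("recommendation rejected: no justification provided")
    if amount > SPEND_LIMIT:
        return "escalate_to_human"
    return "auto_approve"
```

The "explanation requirement" here is deliberately structural: the agent cannot even submit a recommendation without stating why, which gives reviewers and auditors something to evaluate.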

    Consideration 5: Harden Defenses Inside and Around Your AI Agents

    This is a critical consideration you cannot skip over or ignore. To avoid model drift or unexpected behaviors, organizations work with their security teams or solution providers to validate training data sources and protect against data poisoning. Securing API connections to ERP, financial systems, and vendor databases is essential, as is creating appropriate network segmentation for AI agent operations. How you implement these protections will depend on your specific infrastructure and risk profile.

    Consideration 6: Remember That Human Beings Are Always the Best Defense

    Nothing protects you better than a knowledgeable, well-trained team. Agentic AI is most effective when enhanced by human beings who offer the human context that the technology cannot—and will never—fully understand. Any effort to automate procurement processes will benefit from human oversight to approve, refine, or veto certain actions at critical moments.
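That oversight pattern reduces to a simple rule: the agent proposes, a person disposes. The sketch below is a hypothetical gate; the proposal structure and approval callback are assumptions for illustration.

```python
def execute_with_oversight(proposal: dict, is_critical: bool, human_approves) -> tuple:
    """Run a proposed agent action only if it is non-critical, or a human signs off."""
    if is_critical and not human_approves(proposal):
        return ("blocked", proposal)
    return ("executed", proposal)

# Example: a reviewer declines a critical change to vendor bank details.
result = execute_with_oversight(
    proposal={"action": "change_vendor_bank_details"},
    is_critical=True,
    human_approves=lambda p: False,  # reviewer says no
)
```

The design choice worth noting is that the default for critical actions is "blocked": if no human responds, nothing happens, which fails safe at machine speed.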

    The preceding guide is meant to get you thinking; it is not an operational plan or a comprehensive security framework. If you are looking for something more formal, OWASP has published an excellent report, Agentic AI: Threats and Mitigations. Please also feel free to reach out to me, and I will try to answer any questions you may have. I have tried to be as specific as possible, but I know there are always more requirements to address. For now, I want to emphasize again that there is nothing new about your core concerns, and that security measures should always be implemented alongside broader governance controls, which we will discuss in the next installment.
