In earlier articles in this series on GenAI we have discussed its benefits and how it differs from other forms of artificial intelligence, as well as how these will converge in the future to create highly sophisticated solutions, not least in procurement. We also pointed out some of the disadvantages of GenAI, not the least of which are the cybersecurity risks.
Securing AI in Procurement – the Risks
Let’s examine these risks and consider their possible impact on procurement. CPOs, while exploiting the undoubted benefits of GenAI, must work with cybersecurity experts to address concerns around data privacy, the potential for biased outcomes, and intellectual property issues.
Perhaps the risk that gains the most media attention is that of deepfakes and misinformation. GenAI can create hyper-realistic fake images, videos, or audio to impersonate individuals, notably celebrities. The motives are sometimes malicious, sometimes playful, but the impact of deepfakes is far more profound than mere entertainment or pranks. They spread false narratives that can manipulate public opinion or influence decision-making, posing risks to personal and corporate reputation and to personal security. Deepfakes can lead to identity theft, financial fraud, or even political instability. Imagine what a deepfake video of the CEO issuing a fake profit warning or announcing job cuts could do to an organization’s stock price, or how it might panic employees and stakeholders. Procurement, too, is vulnerable. Imagine the impact of a deepfake video of the Chief Procurement Officer falsely stating that their company was terminating its relationship with a major supplier, or boycotting goods from a certain country.
Phishing and social engineering also pose a threat to procurement. In this context, social engineering means exploiting human trust, which can lead to financial losses or compromised systems. Attackers might impersonate legitimate suppliers, request changes in payment details, or lure employees into clicking malicious links, all leading to unauthorized access or fraudulent transactions. AI-generated phishing emails or messages mimic legitimate communications (for example, a colleague’s or a supplier’s writing style) and evade traditional detection tools.
Malicious code generation via generative AI could hijack procurement software or disrupt production systems, with severe financial and operational consequences. Cybercriminals could use GenAI to craft sophisticated malware tailored to exploit vulnerabilities in procurement platforms. This malware might, for example, manipulate payment instructions by altering a vendor’s bank account details in invoices to divert funds to the criminals’ accounts. Non-AI attacks like the 2016 Bangladesh Bank heist (when $81M was stolen via SWIFT system manipulation) show how payment systems are targeted. GenAI could automate and scale such attacks. AI-generated code could also cancel or reroute orders, falsify signals to suppliers, halt raw material deliveries, or redirect shipments to fraudulent addresses. By exploiting vulnerabilities in ERP/MRP systems, criminals could trigger artificial shortages by overordering or underordering supplies, disrupting just-in-time production. In 2020, a Tesla supplier was hit by ransomware, causing production delays. The worry is that GenAI could enable subtler, harder-to-detect sabotage.
Cybercriminals often seek to bypass approval workflows. They will use GenAI to generate fake approvals or exploit weak authentication (such as mimicking authorized users via AI-generated emails). AI-generated code could be developed to bypass signature-based antivirus systems.
Even without the direct and deliberate intervention of cybercriminals, GenAI may inadvertently reveal sensitive training data (including personally identifiable information (PII) and trade secrets) in its outputs, a risk known as model leakage or unwanted memorization. A poorly trained language model might unknowingly reproduce confidential information a supplier has shared, including intellectual property, which could then leak to a competitor. Crafted inputs can also trick AI systems into revealing confidential data, bypassing filters, or executing unauthorized commands, such as extracting internal system details. And even if your own security is tight, weaknesses in third-party plugins, open-source models, or training datasets can hand attackers a backdoor.
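One practical mitigation is to scrub obviously sensitive values from text before it ever reaches a model (or its logs). Below is a minimal illustrative sketch in Python; the patterns and the redact helper are hypothetical simplifications, and real deployments would rely on dedicated PII-detection tooling.

```python
import re

# Hypothetical, deliberately simple patterns; real PII detection needs
# dedicated tooling (and will still miss context-dependent secrets).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matches with placeholder tokens before prompting a model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Pay supplier Acme: IBAN DE89370400440532013000, contact j.doe@acme.example"
print(redact(prompt))
# Pay supplier Acme: IBAN [IBAN REDACTED], contact [EMAIL REDACTED]
```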
Finally, GenAI can generate its own security risks. Hallucinations (false AI-generated content) can lead to flawed business decisions, such as the inappropriate award of a contract, as well as legal violations and reputational harm if left unchecked.
Consequences of Security Breaches
The consequences of the security breaches described above are manifold, and the procurement function is a likely target for the simple reason that it is responsible for huge outflows of funds. Attackers can trick procurement staff into authorizing wire transfers to fraudulent bank accounts, leading to substantial financial losses for the organization.
However, direct financial losses are only the start. Phishing can lead to the compromise of sensitive procurement data, including supplier information, contract details, and financial records. Attackers may target trusted suppliers or vendors, potentially introducing malware or launching phishing attacks within the supply chain, impacting the entire organization. A successful phishing or social engineering attack can damage an organization’s reputation, particularly if it involves a breach of trust or a public disclosure of sensitive information.
Attackers impersonate trusted business partners or suppliers, requesting changes in bank details or authorizing fraudulent payments, as seen in the case of Cabarrus County, North Carolina, which lost $2.5 million intended for the construction of a new high school to a business email compromise (BEC) scam.
There have been cases where cybercriminals use AI-generated invoices to mimic a trusted supplier. A slight change in the IBAN might evade automated checks and only be noticed by an eagle-eyed accountant. According to the German Bundeskriminalamt (BKA), in 2022 criminals compromised a supplier’s email, then used ChatGPT to draft “urgent payment update” requests in fluent German. The victim, an automotive manufacturer, transferred €320,000 to a Lithuanian account before the fraud was detected.
The challenge is that today’s cybercriminals are highly tech-savvy, reportedly using GenAI tools such as IBAN-Gen, a malicious LLM plugin that can create thousands of plausible IBANs. They always seem to be one step ahead, especially as standard ERP checks validate an IBAN’s format but not the legitimacy of the underlying account. AI-generated scams exploit rushed approvals, so the fraud is often only noticed after payment has been made.
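To see why format validation is such weak protection, here is a minimal sketch of the standard ISO 13616 mod-97 check that ERP systems typically apply. Any string with valid check digits passes, whether or not the account exists or belongs to your supplier.

```python
def iban_well_formed(iban: str) -> bool:
    """Standard ISO 13616 mod-97 format check, as typically used by ERP systems.
    Passing says nothing about whether the account is real or legitimate."""
    iban = iban.replace(" ", "").upper()
    if not (15 <= len(iban) <= 34) or not iban.isalnum():
        return False
    rearranged = iban[4:] + iban[:4]   # move country code + check digits to the end
    digits = "".join(str(int(ch, 36)) for ch in rearranged)  # A=10 ... Z=35
    return int(digits) % 97 == 1

# The well-known example IBAN from the ISO standard passes ...
print(iban_well_formed("GB82 WEST 1234 5698 7654 32"))   # True
# ... and so would any fabricated-but-plausible IBAN with valid check digits.
```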
Establishing Robust Security Protocols & Mitigation Strategies
As GenAI becomes more pervasive, organizations will need to adapt. Robust security protocols reduce the risk of bad things happening, while mitigation strategies minimize or contain the damage when they do. Procurement leaders are advised to work closely with their colleagues in cybersecurity. We set out some key actions below, but if you are looking for a deeper understanding of the issue there are some great references online, such as the Open Worldwide Application Security Project (OWASP) Top 10 for LLM Applications. You can also find reports by specialist vendors such as Palo Alto Networks and NTT Data.
First, you should establish a clear governance framework. Not sure where to start? Don’t worry, there is plenty of free advice available online. You could start with NIST’s AI Risk Management Framework (RMF). Ensure that all employees are educated on AI risks and secure usage protocols. Your cybersecurity team should deploy AI-powered detection for adversarial attacks. Make sure they are aware of any particular exposures and vulnerabilities in the procurement space.
Human oversight and intervention are vital with GenAI, which gets things wrong from time to time. Requests for information and action usually require some element of refinement; it is an iterative process. Human-in-the-loop (HITL) systems are characterized by their requirement for human interaction at critical points in the AI’s decision-making process and in reviewing AI outputs.
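As a simple illustration of the HITL pattern, the sketch below routes any AI-proposed action to a human review queue unless it is both low-impact and high-confidence. The Proposal structure, the thresholds and the queue are hypothetical placeholders, not a reference design.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str               # e.g. "update_vendor_iban"
    amount_at_risk: float
    model_confidence: float   # 0.0 - 1.0, as reported by the model

REVIEW_QUEUE: list[Proposal] = []

def submit(p: Proposal) -> str:
    """Auto-apply only trivial, high-confidence proposals; anything
    touching vendor records or large sums goes to a human."""
    if p.action.startswith("update_vendor") or p.amount_at_risk > 1_000 \
            or p.model_confidence < 0.95:
        REVIEW_QUEUE.append(p)        # human-in-the-loop checkpoint
        return "queued for human review"
    return "auto-applied"

print(submit(Proposal("update_vendor_iban", 250_000.0, 0.99)))
# queued for human review -- payment-related changes always need a person
```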
Organizations using any kind of IT today should build a zero-trust architecture: strict access controls and multi-factor authentication (MFA) for payment and order changes. Training data for GenAI systems should be encrypted and/or anonymized.
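In code terms, “deny by default” can be as simple as refusing any sensitive operation whose session lacks a recent MFA assertion. A minimal sketch, in which the Session fields and the five-minute freshness window are assumed values:

```python
import time

class Session:
    def __init__(self, user: str, mfa_verified_at: float | None):
        self.user = user
        self.mfa_verified_at = mfa_verified_at  # epoch seconds, None if no MFA

MFA_MAX_AGE = 300  # assumed policy: re-verify within 5 minutes

def require_fresh_mfa(session: Session) -> None:
    """Deny by default: raise unless MFA was completed recently."""
    if session.mfa_verified_at is None \
            or time.time() - session.mfa_verified_at > MFA_MAX_AGE:
        raise PermissionError("fresh MFA required for payment/order changes")

def change_vendor_bank_details(session: Session, vendor_id: str, new_iban: str):
    require_fresh_mfa(session)   # zero-trust checkpoint before the sensitive action
    print(f"{session.user} updated {vendor_id} -> {new_iban}")

change_vendor_bank_details(Session("buyer1", time.time()), "V-042", "DE89370400440532013000")
```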
Defences against invoice fraud must combine procedural, technological and human controls. IBAN white-listing best practices involve locking vendor bank accounts in ERP systems and requiring manual override for changes. Tools such as Darktrace and Vectra AI flag subtle inconsistencies, for example new IBANs for known vendors. Blockchain verification, using smart contracts to authenticate invoices, may emerge in the coming years.
Some old-fashioned best practices remain essential, such as mandating dual signatures for payment changes, calling suppliers back to verify critical changes via pre-registered phone numbers (not email) and, again, staff training. Employees must be required to inspect every invoice field, since AI scams often alter only one or two values; a simple field-by-field comparison against the vendor master record, sketched below, catches exactly this pattern.
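A minimal sketch of that comparison follows; the record layout and the data are invented for illustration only.

```python
# Hypothetical vendor master record, locked in the ERP system.
VENDOR_MASTER = {
    "V-042": {"name": "Acme GmbH", "iban": "DE89370400440532013000",
              "email": "billing@acme.example"},
}

def diff_invoice(vendor_id: str, invoice: dict) -> list[str]:
    """Return the fields where the invoice deviates from the master record.
    Any hit on 'iban' should hard-block payment pending a phone callback."""
    master = VENDOR_MASTER[vendor_id]
    return [field for field in master if invoice.get(field) != master[field]]

incoming = {"name": "Acme GmbH", "iban": "LT601010012345678901",  # one altered field
            "email": "billing@acme.example"}
changed = diff_invoice("V-042", incoming)
if "iban" in changed:
    print("HOLD payment: bank details changed, verify via pre-registered phone number")
```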
Above all, procurement professionals must have robust procedures in place for vetting suppliers, ensuring that they comply with all the relevant legal requirements, are financially stable and have a clean track record. But this is more than a one-off box-ticking exercise; regular auditing and monitoring are essential to stay safe.
Vendor Assessment and Collaboration
It goes without saying that it’s vitally important to choose reliable vendors, but some considerations can often be overlooked. Key criteria include financial health, performance history and ethical and ESG compliance. But in the era of GenAI, you must take extra special care to satisfy yourself that a vendor’s cybersecurity posture is as strong as yours.
So, audit vendors that provide AI-driven services for practices like data encryption, breach response plans, and AI model security, including anti-poisoning and data-sanitization measures. Techniques such as robust principal component analysis (RPCA) separate clean data from potential poison; tools such as Fiddler AI provide alerts when LLM or other model behavior deviates due to poisoned inputs. For strategic suppliers, you should require certifications such as SOC 2 Type II, or alignment with MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems).
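Full RPCA decomposes a data matrix into a low-rank part plus a sparse outlier part. As a simplified stand-in, the sketch below flags suspect training rows by their reconstruction error under plain PCA, using synthetic data; it illustrates the idea rather than any vendor’s implementation.

```python
import numpy as np

def pca_residuals(X: np.ndarray, n_components: int = 2) -> np.ndarray:
    """Per-row distance from the top principal subspace; large values
    suggest rows that do not fit the bulk of the training data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components]                   # top principal directions
    return np.linalg.norm(Xc - Xc @ V.T @ V, axis=1)

rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 5))             # clean data lives near a 2-D subspace
X = rng.normal(size=(200, 2)) @ basis + 0.05 * rng.normal(size=(200, 5))
X[:3] = 5.0 * rng.normal(size=(3, 5))       # three "poisoned" rows off that subspace

residuals = pca_residuals(X)
print("suspect rows:", np.argsort(residuals)[-3:])   # the three poisoned rows
```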
Questions that you and your cybersecurity colleagues should put to suppliers that use, or are exposed to, GenAI include: “How do you validate training data integrity?” (they should be using anomaly detection and/or third-party audits); “Do you employ adversarial testing?” (guidance is provided in the NIST AI RMF); and “Can you share data provenance records?”
Developing a Comprehensive Security Policy
The best defense against AI-generated threats is AI itself. Machine learning algorithms can analyze vast data sets in real time to identify anomalies and detect potential security breaches that humans might miss. But AI protection alone is not enough. Policies, processes and human intervention are also important, as are regular staff training and updates.
The cybersecurity community is growing and is a great asset. Work with your suppliers’ IT and cybersecurity experts to facilitate better information exchange and threat intelligence sharing. A comprehensive and proactive approach to cybersecurity can significantly reduce response times and limit the damage caused by AI-driven attacks.
Leveraging Technology for Enhanced Security
While it would be inappropriate to promote any particular cybersecurity tools or solutions (apart from our own!) we can set out some capabilities that should be covered:
Advanced threat detection & behavioral analysis: The solution should analyze network traffic, user behavior, and endpoint activities to identify anomalies and deviations from baselines (such as unusual login times and data exfiltration attempts). Look for solutions that use machine learning to detect previously unknown threats (for example, polymorphic malware and novel attack vectors) without relying solely on signature-based methods; a minimal sketch of this idea follows this list.
Real-time response & automation: To reduce mean time to respond (MTTR), solutions should provide capabilities such as quarantining infected devices, blocking malicious IPs, or revoking access autonomously.
Comprehensive visibility & unified monitoring: This should include cross-platform coverage with the ability to monitor endpoints, cloud environments, identities, and networks from a single dashboard, together with threat intelligence feeds from third-party databases.
Human oversight: The solution should provide clear rationale for alerts and facilitate HITL manual review to prevent AI errors.
Continuous learning: The AI should evolve with new threat data, adapting to emerging tactics (such as AI-generated deepfakes and adversarial prompts).
Low false positives: The best solutions balance sensitivity with precision to avoid alert fatigue. Natural language processing (NLP) can help filter out benign anomalies.
Compliance & privacy safeguards: The solution should offer data encryption & anonymization to ensure that sensitive data used for AI training is protected against leaks. It should have built-in support for frameworks such as GDPR, NIST AI RMF, or ISO 42001 (for AI-specific governance).
Proactive risk mitigation: Today’s predictive analytics use historical data to forecast attack trends. Attack surface management scans guard against vulnerabilities in external-facing assets such as APIs.
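As promised above, here is a minimal sketch of behavioral anomaly detection, using scikit-learn’s IsolationForest on two invented features (login hour and megabytes transferred). The features, the 1% contamination setting and the data are illustrative assumptions, not a production design.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Baseline behavior: office-hours logins, modest transfer volumes.
normal = np.column_stack([
    rng.normal(10, 2, size=500),    # login hour (roughly 08:00-12:00)
    rng.normal(50, 15, size=500),   # MB transferred per session
])
# One suspicious session: a 3 a.m. login with a bulk data pull.
sessions = np.vstack([normal, [[3.0, 900.0]]])

model = IsolationForest(contamination=0.01, random_state=0).fit(sessions)
flags = model.predict(sessions)     # -1 = anomaly, 1 = normal
print("flagged sessions:", np.where(flags == -1)[0])   # includes index 500
```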
There are also a few red flags to watch out for when selecting cybersecurity vendors. These include over-reliance on static rules: solutions that lack adaptive ML models will struggle against evolving threats. Poor integration with existing systems, such as SRM and procurement platforms, is an absolute no-no, as it will create silos and gaps. Proprietary ecosystems may also limit flexibility.
What Does the Future Hold for Cybersecurity?
Cybersecurity will evolve as new threats emerge. GenAI and other forms of artificial intelligence will play a central role in foreseeing and mitigating potential threats. Alongside technological developments, ethical considerations and the creation of robust AI governance frameworks will be crucial in all areas of business and especially functions that by definition work with the external environment, such as procurement and supply chain management.
Businesses must stay ahead of potential attackers and internal risks by embracing emerging technologies and fostering a culture of continuous improvement. Collaboration across industries and with governmental agencies will be vital in developing comprehensive cybersecurity strategies.
Conclusion
Generative AI brings many benefits to organizations, not least their procurement teams, but it also lowers the barrier for highly targeted, automated attacks on procurement and production systems. Proactive defense-in-depth measures are essential.
When selecting cybersecurity solutions, you should shortlist those that blend AI innovation with operational practicality—automating the grunt work while empowering human analysts with actionable insights. For deeper vendor-agnostic frameworks, you should refer to standards such as NIST’s AI Risk Management Framework (RMF).