Let’s recap what we understand by the term artificial intelligence. The term has become as ubiquitous as “computing” or “data processing” were thirty years ago, or “business intelligence” was twenty years ago, or “big data” and “cloud” just a few years ago. Its meaning has certainly become broad and diluted, and the term is often misused, especially as it gets applied to a wide range of technologies across industries. Artificial intelligence in the narrower sense typically refers to machine-displayed intelligence that emulates human cognitive functions, such as problem-solving and decision-making. Modern AI often combines machine learning and deep learning techniques, training on large datasets to assist or automate decision-making processes. Today, however, “artificial intelligence” is frequently used as a catch-all for technologies ranging from predictive text and chatbots to deep neural networks and autonomous agents, many of which are only loosely connected to the original notion of “intelligence”.
What is Predictive AI?
Predictive analytics, often referred to as predictive AI, remains a foundational and mature component within the artificial intelligence landscape. According to Gartner’s 2024 Hype Cycle for Artificial Intelligence, predictive analytics is considered a traditional AI capability that is increasingly being integrated into enterprise application suites such as ERP, CRM, digital workplace, supply chain, and knowledge management systems. These integrations aim to derive more insights from data within such applications.
While the Gartner report does not specify the exact position of predictive analytics on the hype cycle graph, its widespread adoption and integration into enterprise systems suggest that it is beyond the initial phases of the cycle. Typically, technologies that have matured and are widely adopted reside on the “slope of enlightenment” or have reached the “plateau of productivity”, indicating a stable and productive phase in their lifecycle.
In contrast to emerging technologies like generative AI, which are currently experiencing heightened expectations and scrutiny, predictive analytics has established itself as a reliable tool for forecasting and decision-making across various industries. Its continued integration into enterprise applications underscores its value in enhancing data-driven insights and operational efficiency.
To illustrate the power and limitations of predictive analytics, let’s take a topic that was in the news recently. In power generation, predictive analytics can be deployed to help prevent outages, such as the one that occurred in Spain on April 28. Many grid operators globally are already investing in this area, but gaps remain in deployment, integration, and real-time responsiveness.
Predictive AI helps, first, with equipment failure prediction. By analyzing sensor data from transformers, turbines, or circuit breakers, predictive models can detect wear, overheating, or anomalies well before a failure occurs. This allows maintenance engineers to schedule interventions more efficiently, making better use of time and resources. Second, predictive models can analyze usage patterns and external data (such as meteorological reports) to anticipate demand spikes, for example those caused by heatwaves or cold snaps, and help balance load more accurately between regions or generation sources. Third, predictive analytics helps manage renewable energy variability: for solar and wind, it can forecast generation based on weather conditions, allowing better planning and use of storage or backup power. And fourth, AI can simulate scenarios based on historical data to predict instability or cascading failures, enabling pre-emptive action such as load shedding or rerouting power.
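To make the first use case concrete, the sketch below flags sensor readings that deviate sharply from a trailing baseline. This is a deliberately simplified illustration: real grid-monitoring systems use far richer models, and the temperature values and thresholds here are invented for the example.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, z_threshold=3.0):
    """Return indices of readings that deviate strongly from a trailing baseline.

    readings: list of floats (e.g. transformer oil temperature, deg C)
    A reading is flagged when it lies more than z_threshold standard
    deviations from the mean of the preceding `window` samples.
    """
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical temperature trace with one sudden spike at index 8
temps = [61.0, 61.2, 60.8, 61.1, 61.0, 60.9, 61.3, 61.1, 78.5, 61.2]
print(flag_anomalies(temps))  # → [8]
```

In practice this kind of statistical baseline is only a first line of defence; production systems combine it with learned models of normal equipment behaviour and domain-specific failure signatures.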
However, predictive AI has its limitations. Its effectiveness hinges on access to timely, high-quality data, but data silos and legacy infrastructure often constrain visibility across the grid. In many regions, integration between national grids and private operators is inconsistent, resulting in fragmented datasets and blind spots that can degrade the accuracy of predictions.
Moreover, predictive models typically rely on historical patterns, which means they may struggle to anticipate extreme weather events or unprecedented surges in demand. These anomalies are becoming more frequent due to climate change and can overwhelm even optimized systems. Likewise, cyber risks, whether from attacks or compromised data streams, can disrupt inputs and reduce trust in automated decision-making.
For predictive AI to fulfil its potential, it must be backed by robust data integration, resilient infrastructure, and layered contingencies (including human oversight) to ensure reliable outcomes in high-stakes environments like power generation.
What is Generative AI?
Generative AI refers to a subset of artificial intelligence that focuses on creating new content, including text, images, music, code, and video, by learning patterns from existing data. Unlike traditional AI systems, which are primarily designed for classification, prediction, or decision-making, generative AI produces novel outputs. It is also a relatively new phenomenon, and this is reflected in the Gartner hype cycle: “hype abounds”! Gartner points out that the term GenAI covers many different technologies and applications. “The excitement over, and volume of, available technologies and techniques can make it difficult for AI leaders to navigate what will have the most business impact in the immediate and farther-out future,” the research states.
In short, this subset of AI is largely still an area of innovation. Few specific applications have passed the “peak of inflated expectations” to reach the “trough of disillusionment”. In other words, it will be some time before we can take a balanced view of some areas of development in GenAI.
Nevertheless, GenAI is beginning to play a valuable, if still emerging, role in the electricity sector. Utilities and grid operators are using it to streamline internal processes, such as drafting outage reports, summarizing sensor logs, generating compliance documentation, and even simulating incident response scenarios. Customer-facing applications include automated responses to service queries, personalized energy usage reports, and multilingual communication tools that can adapt messaging across demographics.
On the operational side, some firms are experimenting with GenAI to generate synthetic datasets for training predictive models where real-world data is limited or too sensitive to use directly.
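The idea behind synthetic data can be illustrated with a minimal sketch: generate new values that preserve the aggregate statistics of a sensitive dataset without reusing any individual measurement. Real systems use far more capable generative models (GANs, diffusion models, or LLM-based generators); the Gaussian resampling and the load figures below are purely illustrative assumptions.

```python
import random

def synthesize_load_profile(real_profile, n_samples, seed=0):
    """Generate synthetic load values that mimic the mean and spread
    of a (possibly sensitive) real profile, without exposing any
    individual real measurement."""
    rng = random.Random(seed)  # seeded for reproducibility
    mu = sum(real_profile) / len(real_profile)
    var = sum((x - mu) ** 2 for x in real_profile) / len(real_profile)
    sigma = var ** 0.5
    return [rng.gauss(mu, sigma) for _ in range(n_samples)]

# Invented hourly load values in MW
real = [320.0, 310.5, 305.2, 400.1, 415.8, 390.3]
synthetic = synthesize_load_profile(real, 1000)
```

A predictive model trained on `synthetic` sees realistic-looking variation, while the original measurements stay private; more sophisticated generators additionally preserve temporal correlations and rare events, which this sketch does not.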
However, GenAI also has its limitations. Its outputs are only as reliable as the data and prompts it receives, yet accuracy, explainability, and auditability are critical concerns in such a highly regulated industry. Human beings must remain accountable to regulators and the public for problems such as outages; it’s simply not acceptable to blame the software.
Unlike predictive models that are grounded in statistical learning, GenAI is not inherently designed for precision; it can confidently generate plausible but incorrect content (so-called “hallucinations”). This presents a risk in contexts where misinformation could have safety, financial, or compliance consequences. In addition, integrating GenAI safely into critical infrastructure workflows, in which traceability and accountability are paramount, requires careful governance, human validation, and often custom fine-tuning.
While GenAI technology holds promise, particularly for augmenting human productivity and enhancing communication, it is best deployed today as a support tool, not a decision-maker in core operational systems such as electrical energy generation and distribution.
Generative AI versus Predictive AI in Specific Industries
Predictive and generative AI models are having a huge impact on many industries. Here are a few examples from manufacturing, higher education and the public sector:
Predictive AI in Automotive Manufacturing
- Predictive maintenance: AI models analyze sensor data from machinery and assembly lines to predict equipment failures before they occur, reducing downtime and maintenance costs.
- Demand forecasting and inventory optimization: Manufacturers use predictive analytics to anticipate vehicle demand by model, region, and season, helping streamline parts procurement and minimize overstock or shortages.
- Supply chain risk assessment: AI systems evaluate supplier performance, geopolitical risk, and logistics data to forecast potential disruptions, thereby enabling proactive mitigation strategies (such as finding alternative suppliers or adjusting delivery schedules).
Generative AI in Automotive Manufacturing
- Design and prototyping: Engineers increasingly use GenAI to generate and iterate design concepts, from vehicle exteriors to interior ergonomics, by feeding in performance goals, cost constraints, and aerodynamic requirements.
- Technical documentation and training: GenAI helps generate repair manuals, safety instructions, and training content for technicians, often customized for different markets or experience levels.
- Procurement automation: In supply chain functions, GenAI is being used to draft RFPs, supplier evaluations, and contract summaries, accelerating cycles and improving consistency across sourcing events.
Predictive AI in Higher Education
- Student retention and success: Institutions use predictive analytics to identify at-risk students by analyzing data such as attendance, grades, LMS activity, and demographic factors. This enables timely interventions such as academic advising or tutoring support.
- Course demand forecasting: AI models analyze enrolment trends and program popularity to predict course demand, helping procurement professionals and faculty allocate and use resources more efficiently.
- Facilities and energy management: Predictive systems help optimize campus operations, from forecasting building occupancy to managing energy usage and maintenance schedules proactively.
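The student-retention use case above can be sketched in miniature as a risk score that combines a few indicators. The weights and thresholds here are invented for illustration; a real institution would learn them from historical outcomes (for instance via logistic regression) and validate them carefully for fairness.

```python
def at_risk_score(attendance_rate, grade_avg, lms_logins_per_week):
    """Combine a few indicators into a 0-1 at-risk score.

    Weights are hypothetical; a production system would fit them
    to historical student outcomes rather than hand-pick them.
    """
    # Normalize each signal so that higher = more risk
    risk_attendance = 1.0 - attendance_rate                # 0..1
    risk_grades = max(0.0, (60.0 - grade_avg) / 60.0)      # below 60 raises risk
    risk_engagement = max(0.0, (5.0 - lms_logins_per_week) / 5.0)
    return round(0.4 * risk_attendance + 0.4 * risk_grades
                 + 0.2 * risk_engagement, 3)

print(at_risk_score(0.95, 82.0, 6.0))  # engaged student → 0.02
print(at_risk_score(0.50, 45.0, 1.0))  # several warning signs → 0.46
```

Scores above some institution-chosen threshold would then trigger human follow-up (advising, tutoring) rather than any automated decision, in keeping with the oversight concerns discussed later in this article.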
Generative AI in Higher Education
- Academic support tools: GenAI is being used to provide writing assistance, automated summarization of lecture notes, and interactive tutoring for subjects such as coding or essay structure, freeing up faculty for higher-value teaching.
- Administrative content generation: Universities can deploy GenAI to draft policy documents, compose donor communications, or summarize long reports for executive review, improving speed and consistency.
- Personalized student engagement: Chatbots powered by generative AI can generate tailored responses to student enquiries, guide them through enrolment or financial aid processes, and provide 24/7 support in multiple languages.
Predictive AI in the Public Sector
- Resource allocation: Predictive models can be used to forecast demand for public services, such as healthcare, education, or emergency services, so departments can allocate budgets and staff more effectively.
- Fraud detection in procurement: AI systems analyze historical procurement data, vendor patterns, and pricing anomalies to flag suspicious transactions or high-risk suppliers, improving compliance and transparency.
- Policy impact forecasting: Civil service teams increasingly use simulation-based predictive models to anticipate the outcomes of new policies or regulations, particularly in areas like housing, employment, or energy.
Generative AI in the Public Sector
- Document drafting: GenAI assists in writing policy briefs, consultation summaries, and legislative drafts, reducing time-to-delivery and improving readability for public engagement.
- Citizen engagement and communications: Chatbots and GenAI tools are being used to draft responses to public queries, translate content into plain language, or personalize information for different citizen groups.
- Procurement support: Generative AI can assist procurement teams by drafting tender documents, summarizing supplier responses, and preparing evaluation reports, which are especially helpful in complex, multi-agency procurements.
Challenges and Ethical Considerations
Artificial intelligence has its critics, notably when it comes to ethical considerations such as privacy, bias and fairness. Let’s look at these in turn.
Data Privacy Concerns with Predictive Analytics
Predictive models often rely on personal data (e.g. academic records, health info, purchasing history). If this data is insufficiently anonymised or poorly governed, it can expose individuals to privacy risks. Even anonymized data can be used to make inferences about individuals, sometimes revealing protected characteristics (such as ethnicity or health status) that were not explicitly collected.
There’s also a tendency to keep data “just in case”. However, this increases the risk of function creep, or using data for purposes beyond the original intent, possibly without user consent. GDPR and similar regulations guard against this, but there is no guarantee that companies will adhere to them at all times.
Bias and Fairness Issues with Predictive Analytics
It is claimed that predictive models trained on historical data may reinforce existing inequalities (for example in hiring, admissions, or policing) if past decisions reflected bias. There is plenty of evidence to support this. For example, a study by the Institute for Higher Education Policy found that predictive algorithms used to assess student success can be racially biased. Specifically, these models were more likely to predict failure for Black and Latino students who ultimately succeeded, compared to their White and Asian counterparts.
AI may unintentionally use correlated variables (like postcode or language) as proxies for protected attributes, leading to disparate impact. Moreover, when models are complex or not interpretable, it becomes difficult to challenge unfair outcomes or explain why a decision was made.
Data Privacy Concerns with Generative AI
GenAI models are often trained on proprietary, confidential, or personally identifiable information. There is a risk that they may leak private details in their outputs, especially if they are trained on public data, for example on the web.
There are growing concerns around whether individuals or institutions consented to their data being used for model training, especially in education, healthcare, and public sector contexts.
Bias and Fairness Issues with Generative AI
GenAI models trained on web-scale content may reproduce stereotypes or misinformation, including biased representations of gender, race, religion, or disability. A study analyzing images generated by tools such as Midjourney, Stable Diffusion, and DALL·E 2 found systematic gender and racial biases. For instance, women were often depicted as younger and happier, while men appeared older and more neutral or angry. Additionally, there was a notable underrepresentation of African Americans. An investigation by Bloomberg highlighted that generative AI models, such as Stable Diffusion, can take existing racial and gender stereotypes to extremes, producing outputs that are more biased than real-world data.
When GenAI tools fabricate information, they often do so in a confident tone, which can mislead users, especially in contexts that involve legal, medical, or educational authority. This phenomenon has become known as “hallucinated authority”.
Finally, GenAI systems may perform poorly in minority languages or dialects, reinforcing inequality in access to digital services and communication.
Conclusion – Mitigation Strategies Are Needed
Predictive AI (or predictive analytics) and generative AI are very different branches of artificial intelligence. The former has been around longer, and the use cases are therefore more proven. The latter offers great potential, but the technologies and applications are newer, and we do well to guard against some of the hype.
While both offer huge benefits across different industry sectors, there are serious challenges that must be addressed. Robust data governance is required. Organizations must define clear boundaries for what data is collected, how it’s used, and how long it’s stored. They should also conduct bias audits and fairness testing. Tools and frameworks are available (e.g. Aequitas, Fairlearn) to measure and mitigate disparate outcomes.
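Libraries such as Fairlearn package fairness metrics ready-made, but the core idea behind a basic bias audit is simple enough to sketch by hand: compare how often a model produces positive outcomes for different groups. The decisions and group labels below are invented audit data for illustration.

```python
def selection_rates(decisions, groups):
    """Per-group rate of positive decisions (1 = positive outcome)."""
    rates = {}
    for g in set(groups):
        in_group = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(in_group) / len(in_group)
    return rates

def demographic_parity_difference(decisions, groups):
    """Gap between the highest and lowest group selection rates.
    0 means parity; larger values indicate disparate outcomes."""
    rates = selection_rates(decisions, groups).values()
    return max(rates) - min(rates)

# Hypothetical audit sample
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.75 - 0.25 = 0.5
```

Demographic parity is only one of several competing fairness definitions (equalized odds and calibration are others), so an audit should report more than one metric and interpret them in the context of the decision being made.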
Especially in high-stakes settings, final decisions should be subject to human oversight to ensure accountability — so-called human-in-the-loop (HITL) interactions with machine learning.
Finally, organizations should implement interpretable models where possible and provide users with clear rationale for AI-assisted decisions or outputs. This is especially challenging in the deployment of GenAI for some applications in highly regulated industries. For example, GenAI might produce recommendations to investors based on historical data that turn out to be erroneous. Who is to blame if the investments fail badly?