By implementing methods from explainable artificial intelligence, engineers can enhance users’ trust in predictions made by AI models. This strategy was recently evaluated in the context of wind energy production.
Explainable artificial intelligence (XAI) is a field within AI that gives users insight into the inner workings of AI models, helping them understand how outputs are produced and whether predictions can be deemed reliable. XAI has recently gained prominence in computer vision, particularly in image recognition, where understanding the choices a model makes is essential. Following its success there, XAI is now spreading to other domains that demand trust and transparency, such as healthcare, transportation, and finance.
Researchers from EPFL’s Wind Engineering and Renewable Energy Laboratory (WiRE) have adapted XAI for the opaque AI models utilized in wind power forecasting. A study published in Applied Energy suggests that XAI aids in clarifying wind energy predictions by shedding light on the sequence of decisions made by these models, while also highlighting which factors should be included as inputs.
“For grid operators to successfully incorporate wind energy into their smart grids, they require dependable daily forecasts of wind power generation with minimal error margins,” states Prof. Fernando Porté-Agel, head of WiRE. “When forecasts are inaccurate, grid operators often have to make last-minute adjustments, frequently resorting to pricier fossil fuel energy sources.”
Improving prediction credibility
The forecasting models currently used for wind power output rely on fluid dynamics, weather projections, and statistical techniques, yet they still carry a significant margin of error. AI has allowed engineers to refine these predictions by analyzing vast datasets to uncover correlations between weather model variables and wind turbine output. Many AI models, however, operate as “black boxes,” making it hard to decipher how a given prediction was reached. XAI addresses this by making the modeling process behind a forecast more transparent, and the resulting predictions more trustworthy.
Identifying key variables
For their research, the team trained a neural network on key input variables from a weather model that strongly influence wind power generation, including wind direction, wind speed, air pressure, and temperature, together with data collected from wind farms in Switzerland and elsewhere. “We customized four XAI techniques and created metrics to assess the reliability of a technique’s interpretation,” explains Wenlong Liao, the study’s lead author and a postdoctoral researcher at WiRE.
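The study’s own model and data are not reproduced here, but a minimal sketch can convey the kind of setup described. The Python block below trains a small feed-forward network to map the four weather inputs named above to a normalized power output; the synthetic data, the variable ranges, and the toy power curve are illustrative assumptions, not values from the paper.

```python
# Minimal sketch, NOT the authors' model: a small feed-forward network that
# maps the four weather inputs named in the article to a normalized power
# output. All data below is synthetic and purely illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.weibull(2.0, n) * 8.0,    # wind speed (m/s), Weibull-shaped
    rng.uniform(0.0, 360.0, n),   # wind direction (degrees)
    rng.normal(950.0, 15.0, n),   # air pressure (hPa)
    rng.normal(10.0, 8.0, n),     # temperature (deg C)
])
# Toy "power curve": output rises with the cube of wind speed, then saturates.
y = np.clip(X[:, 0] ** 3 / 500.0, 0.0, 1.0) + rng.normal(0.0, 0.05, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),  # scale inputs so the network trains stably
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print(f"Held-out R^2: {model.score(X_test, y_test):.3f}")
```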
In the field of machine learning, metrics are tools that engineers use to measure model effectiveness. For instance, they can help determine whether the relationship between two variables reflects causation or mere correlation. These metrics are tailored to specific applications, such as diagnosing health issues, evaluating the impact of traffic congestion, or assessing a company’s stock market value. “In our study, we established various metrics to gauge the credibility of XAI techniques. Importantly, reliable XAI methods can identify which variables to include in our models for accurate forecasting,” Liao adds. “We even discovered that certain variables could be omitted without sacrificing accuracy.”
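The four customized XAI techniques and the credibility metrics are specific to the study and are not reproduced here. The snippet below substitutes a generic stand-in, permutation importance, which ranks how much a trained model relies on each input by shuffling one feature at a time and measuring the drop in score. It continues the sketch above, reusing its model and test split.

```python
# Generic stand-in for the paper's bespoke XAI metrics: permutation
# importance. Reuses `model`, `X_test`, and `y_test` from the sketch above.
from sklearn.inspection import permutation_importance

feature_names = ["wind speed", "wind direction", "air pressure", "temperature"]
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
# Rank features by how much shuffling them degrades the held-out score.
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked:
    print(f"{name:>15}: {importance:.3f}")
# Features whose shuffling barely moves the score are candidates to drop
# from the inputs without sacrificing accuracy.
```

On the synthetic data above, wind speed should dominate the ranking, which mirrors the article’s point: a credible explanation should surface physically plausible drivers and flag inputs that can be omitted without hurting accuracy.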
Increasing competitiveness
Jiannong Fang, an EPFL researcher and co-author of the study, believes these insights could enhance the competitiveness of wind energy. “Power system operators are unlikely to embrace wind energy if they lack understanding of the underlying principles guiding their forecasting models,” he notes. “However, with an XAI-informed approach, these models can be assessed and improved, thus yielding more reliable predictions of daily fluctuations in wind energy.”