Letter · Open access

Short-term solar irradiance forecasting using convolutional neural networks and cloud imagery


Published 1 April 2021 © 2021 The Author(s). Published by IOP Publishing Ltd
Citation: Minsoo Choi et al 2021 Environ. Res. Lett. 16 044045. DOI: 10.1088/1748-9326/abe06d



Abstract

Access to accurate, generalizable and scalable solar irradiance prediction is critical for smooth solar-grid integration, especially in light of the accelerated global adoption of solar energy production. Both physical and statistical prediction models of solar irradiance have been proposed in the literature. Physical models require meteorological forecasts—generated by computationally expensive models—to predict solar irradiance, with limited accuracy in sub-daily predictions. Statistical models leverage in-situ measurements which require expensive equipment and do not account for meso-scale atmospheric dynamics. We address these fundamental gaps by developing a convolutional global horizontal irradiance prediction model, using convolutional neural networks and publicly accessible satellite cloud images. Our proposed model predicts solar irradiance in 12 different locations in the US for various prediction time horizons. Our model yields up to 24% improvement in hour-ahead predictions and 26% in day-ahead predictions compared to a persistence forecast. Moreover, using saliency maps and target-location-focused cropping, we demonstrate the benefits of incorporating meso-scale atmospheric dynamics for prediction performance. Our results are critical for energy systems planners, utility managers and electricity market participants to ensure efficient harvesting of solar energy and reliable operation of the grid.


1. Introduction

Harnessing solar energy is essential for paving the path toward energy sector decarbonization, a necessary step in reducing the adverse environmental and health impacts of fossil-fuel-based electricity production and decelerating climate change (Rai and Sigrin 2013). Global photovoltaic solar energy production has increased from 994 GWh in 2000, to 443 554 GWh in 2017 (IEA 2019). This rapid increase in capacity requires accurate forecasting of solar production at all levels, from rooftop to utility-scale deployment, to ensure smooth integration and reliable operation of the grid (Lorenz et al 2014, Golestaneh et al 2016). Short-term, hour- and day-ahead forecasts of renewable energy are critical for the efficient and cost-effective operation of the electricity grid (Rachunok et al 2020). Specifically, short-term solar power plant predictions allow utilities and electricity market operators to make informed decisions for scheduling reserve capacity and designing efficient bidding strategies for hour-ahead and day-ahead wholesale power markets. Medium- and long-term forecasts are vital for understanding the strategic value of solar panel deployment from small-scale rooftops to grid-scale solar projects (Antonanzas et al 2016, Kaur et al 2016).

Solar energy production is predominantly a function of the characteristics of the solar panel as well as the amount of solar radiation captured (Antonanzas et al 2016). Therefore, access to accurate solar irradiance forecasts is necessary for energy systems professionals to be able to estimate solar power production at various future time horizons. Solar irradiance is typically estimated via the global horizontal irradiance (GHI), which measures the total radiation (in W m−2) from the Sun on a horizontal surface on the earth (Sengupta et al 2018). As such, GHI prediction is an integral component of forecasting solar energy output.

Previous work on predicting GHI can be classified into physical and statistical models (Antonanzas et al 2016). Physical models are based on theoretical understandings of light transmission to estimate irradiance (AghaKouchak and Nakhjiri 2012). The simplicity of theoretical transmission models makes them highly generalizable, as irradiance is calculated based on information such as wind speed, oxygen levels, ground angle, cloud coverage, and elevation (Antonanzas et al 2016). However, these theoretical models lack the specificity required for accurate GHI forecasting (Dolara et al 2015). Another class of physical models is based on numerical weather prediction methods that use computationally expensive simulations, which are typically run on supercomputers, with limited accuracy in sub-daily predictions (Letendre et al 2014).

Statistical models learn from historical observations and do not leverage physics-based knowledge. Instead, they are empirically driven, and predict future GHI values using statistical learning methods trained on historical data (Yang et al 2018). Statistical solar irradiance prediction methods can be sub-categorized into two paradigms: endogenous and exogenous. Endogenous models generally require ground GHI data as input, as measured by both a pyranometer and a pyrheliometer (Fuquay and Buettner 1957, Kerr et al 1967). Because they require the physical deployment of ground instrumentation, endogenous statistical models lack the ability to spatially generalize to locations without in-situ ground GHI measurements. Contrary to endogenous models, exogenous models utilize non-GHI data sources to predict GHI values. The current state-of-the-art in exogenous modeling uses total-sky imagers, i.e. ground-captured sky images (Feng and Zhang 2020). This procedure takes pictures from the ground at a specific location and utilizes a convolutional neural network to analyze the captured image data to predict GHI. Other examples include recent works by Feng et al (2017), Crisosto et al (2018), Kamadinata et al (2019), Ryu et al (2019), Jiang et al (2020), Le Guen and Thome (2020), all of which involve training accurate artificial neural network (ANN)-based models, using ground-based cloud images, to predict GHI (two of these are hybrid models that use endogenous GHI data as well as exogenous cloud images). However, the prediction time horizons for all these works range from 0 to a maximum of 1 h. Moreover, while classified as exogenous, this approach still requires in-situ hardware deployment to photograph the sky. Thus, it is subject to the same limitations as endogenous predictions.

Moreover, GHI is affected by both point-specific and meso-scale atmospheric conditions (Fuquay and Buettner 1957). The majority of current GHI prediction models do not incorporate micro- and meso-scale atmospheric dynamics at a wide scale (Antonanzas et al 2016). A notable exception is a recent model proposed by Jiang et al (2019) which incorporates meso-scale atmospheric dynamics together with point-specific locational and temporal information to estimate hourly global solar radiation in China, utilizing a deep neural network. However, their estimates are based on real-time input data and thus not suitable for hour- and day-ahead forecasting purposes. Koo et al (2020) also harness an exogenous approach for GHI prediction. Specifically, they train an ANN model using satellite images together with solar zenith angles and hour angles to make hourly estimations of solar energy in Korea. However, their estimation is for hourly sums instead of point estimations, and it also requires real-time data.

In this paper, we propose an exogenous approach: the convolutional global horizontal irradiance (C-GHI) model to predict GHI based on freely available cloud images obtained from geostationary satellites. This approach requires no physical instrumentation, and is easily generalizable across regions. C-GHI is based on convolutional neural networks—a non-linear deep learning technique that can learn complex information from image data. C-GHI overcomes the limitations of current statistical and physical models by accurately predicting GHI using entirely exogenous data—enabling a fully remote, worldwide GHI estimation technique, with prediction quality similar to in-situ methods. Moreover, using data-preprocessing and model inference techniques in a novel way, we demonstrate the benefits of leveraging a holistic approach and accounting for meso-scale dynamics in solar irradiance predictions, and illustrate model accuracy trends as a function of variable temporal prediction horizons. The structure of the paper is as follows. Section 2 summarizes the input data used in the analysis. Section 3 outlines our proposed approach. Finally, results and conclusions are presented in sections 4 and 5, respectively.

2. Data

The data utilized in this study consists of satellite images of the continental United States (CONUS) and GHI measurements from the Cooperative Network for Renewable Resource Measurements (CONFRRM). Section 2.1 describes the satellite imagery data, section 2.2 describes the solar irradiance data, and section 2.3 outlines pre-processing of the data for the analysis.

2.1. Satellite imagery

We used geostationary satellite imagery captured by NOAA's Geostationary Operational Environmental Satellite 8 (GOES8) infrared channels. The satellite imagery data is maintained by the Satellite Data Services (SDS) group at the University of Wisconsin-Madison Space Science and Engineering Center. The data was collected from Multi-format Client-agnostic File Extraction Through Contextual HTTP, a web-based API maintained by SDS (SSEC UW-Madison 2020). Each GOES8 image has a 2125 × 825 pixel resolution per channel. In this study, four infrared channels are used (channels 2–5), operating at wavelengths of 3.9, 6.8, 10.7, and 12 µm. The spatial resolutions vary between 4 and 8 km. Figure 1 shows a sample of the input data from all four channels.


Figure 1. Full resolution GOES8 CONUS infrared satellite images. From (a) to (d), they are channel 2 to channel 5 imagery taken on 15 December 1999 at 19:30 GST.


2.2. Solar irradiance

We collected solar irradiance data for 12 different stations in 7 states throughout the CONUS (figure 2): Florida (FL), Georgia (GA), Mississippi (MS), New Mexico (NM), North Carolina (NC), Texas (TX), and West Virginia (WV). Details of the 12 stations are given in table 1. The irradiance data have a 5 min temporal resolution, with the exception of the Cape Canaveral FL data, which has a 6 min temporal resolution. Moreover, the correlations of GHI between the case study sites are depicted in figure 3.


Figure 2. The locations of the case study sites on continental U.S. satellite imagery.


Figure 3. Correlation of GHI values between case study sites.


Table 1. Detailed information about the case study sites.

| Code | Name | Lat. (°N) | Lon. (°W) | Alt. (m) | Data size (months) |
|------|------|-----------|-----------|----------|--------------------|
| FS | Florida Solar Energy Center | 28.39 | 80.76 | 18.1 | 24 |
| BC | Bethune-Cookman College | 29.18 | 81.02 | 20.0 | 35 |
| SS | Savannah State College, GA | 32.03 | 81.07 | 11.0 | 24 |
| MV | Mississippi Valley State University | 33.05 | 90.33 | 52.0 | 24 |
| ST | Southwest Technology Development Institute | 32.27 | 106.74 | 1200.9 | 15 |
| EC | Elizabeth City State University | 36.28 | 76.22 | 26.0 | 36 |
| AU | The University of Texas at Austin | 30.29 | 97.74 | 21.0 | 14 |
| CN | West Texas A&M University | 34.99 | 101.90 | 1066.8 | 15 |
| ED | University of Texas Pan American | 26.20 | 98.22 | 30.0 | 13 |
| EP | University of Texas at El Paso | 31.80 | 106.40 | 1219.0 | 15 |
| CL | Lyndon B. Johnson Space Center | 29.56 | 95.12 | 33.0 | 15 |
| BS | Bluefield State College | 37.27 | 81.24 | 803.0 | 36 |

Data is available from the start of 1996 until the end of 2012. To demonstrate the applicability of our proposed approach, we train and validate our models using data from 1999 to 2001 as it is the time period with the most complete data across all the sites. However, the model can easily be extended to more recent years, which will likely result in higher predictive accuracy owing to improved resolution of satellite imagery over time.

2.3. Data processing

Both GOES8 images and GHI data are preprocessed to facilitate statistical modeling. GOES8 images are downscaled to 256 × 256 resolution using bicubic interpolation to reduce computation time. All 4 image channels are then combined into one 256 × 256 × 4 tensor (i.e. a multi-dimensional array), where each entry corresponds to one pixel in one of the 4 channels. Input data are also normalized to values in [0, 1]. Missing data, caused by technical failure, regular maintenance of GOES8 systems, or missing/cracked imagery, are removed. In preprocessing the GHI data, night-time values (GHI values less than 10 W m−2) are removed. Similarly, unreliable irradiance measurements are removed based on the quality control flags provided with NREL's CONFRRM data (SERI 1988).
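The preprocessing steps above can be sketched as follows. This is a minimal illustration, not the pipeline code used in the study: the helper names are hypothetical, and nearest-neighbor downscaling stands in for the bicubic interpolation actually used.

```python
import numpy as np

def preprocess_channels(channels, out_size=256):
    """Stack the 4 infrared channels into one (out_size, out_size, 4)
    tensor, normalized to [0, 1]. Nearest-neighbor downscaling stands
    in here for the bicubic interpolation used in the paper."""
    stacked = []
    for img in channels:
        h, w = img.shape
        rows = (np.arange(out_size) * h) // out_size   # nearest source row per output row
        cols = (np.arange(out_size) * w) // out_size
        small = img[rows[:, None], cols]
        lo, hi = small.min(), small.max()
        if hi > lo:
            stacked.append((small - lo) / (hi - lo))   # min-max normalize to [0, 1]
        else:
            stacked.append(np.zeros_like(small, dtype=float))
    return np.stack(stacked, axis=-1)

def filter_night(ghi, threshold=10.0):
    """Drop night-time observations (GHI below 10 W m^-2)."""
    ghi = np.asarray(ghi, dtype=float)
    return ghi[ghi >= threshold]
```

For example, four 825 × 2125 channel arrays become a single 256 × 256 × 4 input tensor.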

3. Methods

The proposed C-GHI prediction model is grounded in convolutional neural networks. In this section, we provide a brief overview of the model formulation (section 3.1), the technical details needed to replicate our experiment (section 3.2), model inferences (section 3.3), and measures of model performance (section 3.4).

3.1. Convolutional neural networks

ANNs are a statistical learning technique inspired by the way neurons work in the human brain. They consist of a series of neurons (i.e. nodes) which activate each other in parallel or in sequence. Each neuron and its connections are called a perceptron. Mathematically, this consists of a combination of multivariate linear regressions followed by a non-linear transformation. The non-linear transformation—called an activation function—is typically applied on the output of a perceptron.

A 'vanilla' version of a neural network algorithm consists of multiple layers of perceptrons. A neural network can learn the complex non-linear mapping between input and target variables by stacking multiple non-linear functions. A mathematical representation of a layer of a fully connected network can be written as:

Equation (1):

$$f_i(X) = a\left(W_i\, f_{i-1}(X) + b_i\right), \qquad f_0(X) = X$$

where fi (X) is the output of ith layer given input X; Wi and bi are the weight tensor and bias of ith layer respectively; and a(·) is a non-linear activation function.

The output of the last layer (fN (X)) is the prediction value given an input X, where N is the number of hidden layers of a model. A neural network with more than 2 hidden layers is considered a 'deep' network. The training of a neural network is done by finding the weights which minimize the distance between the prediction values and the ground-measured observations (distance is represented by a loss function).
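A minimal sketch of this stacked-layer computation is given below, with hypothetical weights and a ReLU activation (the activation used in our model, section 3.2); a fitted network would obtain `layers` from training rather than by hand.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward(X, layers, activation=relu):
    """Apply f_i(X) = a(W_i f_{i-1}(X) + b_i) for each (W_i, b_i) in
    `layers`. The last layer is left linear, as is typical when the
    target is a continuous quantity such as GHI."""
    out = X
    for i, (W, b) in enumerate(layers):
        out = W @ out + b
        if i < len(layers) - 1:   # no activation on the output layer
            out = activation(out)
    return out
```

For example, a two-layer network maps a 2-dimensional input to a single prediction value.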

Convolutional neural networks (CNNs) are a class of deep neural network commonly used for image analysis (Bishop 2006). CNNs take in an input image, assign importance to visual features in the image, and ultimately make predictions about the image's contents. They are typically used for image classification, object detection and segmentation (LeCun et al 2015), and have been used in several environmental applications including urban expansion detection (He et al 2019), biophysical modeling of crop growth (Lin et al 2020), and environmental and water management (Sun and Scanlon 2019).

CNNs differ from ANNs in the way their layers connect. Contrary to an ANN, in a CNN, a neuron of a convolutional layer takes only a portion of the output of the previous layer as its input; this portion is called a receptive field. Within a layer, each neuron has a similarly sized receptive field generated with the same weights and biases, called a kernel. Convolutional layers generally utilize a linear kernel followed by a non-linear activation. To generate the output of the layer, the kernel slides across the input; this sliding is called a convolution operation. The formula for a 2D convolutional layer can be written as:

Equation (2):

$$f_{i+1}(x, y) = a\left(\sum_{t_x}\sum_{t_y} h(t_x, t_y)\, f_i(x + t_x,\, y + t_y)\right)$$

where (x, y) is one entry in the output tensor of (i + 1)th layer, h is a two-dimensional kernel consisting of $|\{t_x\}|\times|\{t_y\}|$ weights (which are shared in the layer for every (x, y)), fi is the output of ith layer, and a(·) is an activation function.

As a way to maintain data dimensions, padding (usually zero padding) and stride values can be adjusted. Padding refers to adding extra (non-informative) pixels around the input images to keep the size of the output after a convolution, and stride refers to the step size by which the kernel slides. A pooling layer, which performs a similar operation to a convolution but applies a predefined summary function instead of a convolution kernel, may also be applied after a convolutional layer to reduce dimension and over-fitting. The two most common pooling methods are max-pooling and average-pooling: max-pooling takes the maximum as the summary function, and average-pooling takes the average. As a result of the convolution operation, the (x, y) value of the output tensor can hold the information of multiple neighboring pixels in the input image (i.e. the receptive field). By stacking multiple convolutional layers, each neuron of the output of the last convolutional layer encompasses a very large receptive field with respect to the input image.
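The convolution and max-pooling operations described above can be sketched directly. This is an illustrative implementation (valid padding, stride 1, single channel), not the optimized routines a deep-learning framework would use.

```python
import numpy as np

def conv2d(f, h, activation=lambda z: z):
    """Slide kernel h over input f (valid padding, stride 1), per
    equation (2): out[x, y] = a(sum_{tx,ty} h[tx,ty] * f[x+tx, y+ty])."""
    kh, kw = h.shape
    H, W = f.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            out[x, y] = np.sum(h * f[x:x + kh, y:y + kw])
    return activation(out)

def max_pool(f, k):
    """Non-overlapping k x k max-pooling; trailing rows/cols that do
    not fill a full window are dropped."""
    H, W = (f.shape[0] // k) * k, (f.shape[1] // k) * k
    return f[:H, :W].reshape(H // k, k, W // k, k).max(axis=(1, 3))
```

Applying `conv2d` with a 2 × 2 kernel to a 4 × 4 input yields a 3 × 3 output; a subsequent `max_pool` with k = 2 halves each spatial dimension.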

3.2. Model specification

The CNN model structure used in our proposed C-GHI model is based on the VGG16 model (Simonyan and Zisserman 2014), with adjustments made to the number of layers and regularization parameters for each study site. We build separate models for each region and time horizon, using the same network architecture briefly described below. The input image passes in series through six convolutional layers, with 32, 64, 64, 128, 128, and 128 filters respectively. The convolutional kernel size is 3, and the stride is 1, in all layers. Max-pooling is applied after every other layer, with kernel sizes 3, 3, and 2 respectively. After the convolutional layers, the data is passed through three fully connected layers with 1024, 1024, and 1 nodes respectively. The rectified linear unit (ReLU), a non-linear activation function, is applied after every layer except the last fully connected layer. An additional L2-regularization term over all weights is added to the loss function to reduce the risk of over-fitting. In short, the final loss function is $MSE + \lambda \sum\|W\|_2$, where MSE stands for mean squared error. Lastly, the Adam optimizer (Kingma and Ba 2015) is used to tune the model weights given the input dataset.
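The padding scheme is not fully specified above, so the following sketch traces feature-map sizes and a parameter count through the described stack under two stated assumptions: 'same' padding for the convolutions and non-overlapping pooling with floor division. The resulting totals are illustrative only.

```python
# Trace feature-map sizes through the C-GHI stack described above.
# Assumptions (not stated in the paper): 'same' padding for convolutions,
# non-overlapping max-pooling with floor division.
conv_filters = [32, 64, 64, 128, 128, 128]   # six conv layers, 3x3 kernels, stride 1
pool_after = {1: 3, 3: 3, 5: 2}              # pool kernel after layers 2, 4 and 6
fc_nodes = [1024, 1024, 1]

size, channels, params = 256, 4, 0
for i, n_filters in enumerate(conv_filters):
    params += (3 * 3 * channels) * n_filters + n_filters  # kernel weights + biases
    channels = n_filters                                  # 'same' padding keeps size
    if i in pool_after:
        size //= pool_after[i]                            # 256 -> 85 -> 28 -> 14

flat = size * size * channels                             # 14 * 14 * 128 = 25088
for n in fc_nodes:
    params += flat * n + n                                # dense weights + biases
    flat = n
```

Under these assumptions the flattened feature map entering the fully connected layers has 25 088 entries, and the bulk of the parameters sit in the first dense layer.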

3.3. Feature visualization

As a post hoc analysis, we apply feature visualization techniques to the fitted models to visualize model behavior at different spatial and temporal scales. Specifically, we harness saliency maps for model inference. Saliency maps visualize the gradient of the cost function with respect to each pixel of the input (Simonyan et al 2013). In other words, saliency maps involve calculating $[\partial E^j/\partial X]_{X = X^j}$, where Ej is the prediction error of the jth observation, X is the variable indicating an image, and Xj is the jth image. The gradient is then projected onto the original image plane. Saliency maps (figure 4) show the effect of a unit change in a pixel on the prediction accuracy: the higher the gradient of a pixel, the more informative that pixel is for the particular prediction.
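Deep-learning frameworks compute this gradient in a single backward pass through the fitted network; the finite-difference sketch below makes the quantity explicit for a stand-in predictor. The `predict` function passed in is hypothetical, not our fitted CNN.

```python
import numpy as np

def saliency_map(predict, X, y_true, eps=1e-4):
    """Approximate [dE/dX] at X, where E = (predict(X) - y_true)^2,
    by central finite differences over each pixel. A deep-learning
    framework obtains the same quantity via backpropagation."""
    sal = np.zeros_like(X, dtype=float)
    err = lambda Z: (predict(Z) - y_true) ** 2
    for idx in np.ndindex(X.shape):
        Xp, Xm = X.copy(), X.copy()
        Xp[idx] += eps
        Xm[idx] -= eps
        sal[idx] = (err(Xp) - err(Xm)) / (2 * eps)
    return np.abs(sal)   # magnitude is what the maps in figure 4 display
```

For a toy predictor that simply sums its input, every pixel carries the same gradient, so the map is uniform; a trained CNN instead concentrates gradient around informative cloud structures.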


Figure 4. Saliency maps. From top to bottom, the rows correspond to 5 min, 1 h, and 1 d ahead predictions. The four sites shown, (a) to (d), are regions AU, BC, CL and ST respectively, the regions with the least GHI correlation with each other. Lighter regions have a higher gradient and thus a higher contribution to the GHI prediction. The red triangles mark the location of each region.


3.4. Model performance

In our experiments, two model performance metrics are used: mean-squared error (MSE), and root mean-squared error (RMSE). For parameter tuning in fitting each model, we use MSE, mathematically represented as:

Equation (3):

$$\mathrm{MSE} = \frac{1}{n}\sum_{i = 1}^{n}\left(y^i - \hat{y}^i\right)^2$$

where yi and $\hat{y}^i$ are the ground-measured and predicted GHI of ith observation respectively.

RMSE is used in comparing the predictive performance of different models. We report RMSE as it is a widely applied metric for solar irradiance prediction, particularly in exogenous predictive models. Additional performance metrics (MAPE, R2, and nRMSE) are also calculated and included in appendix tables A3–A5.

Since two different GHI prediction models cannot be directly compared if the prediction regions, time periods, and temporal horizons do not match, and there is no agreed-upon benchmark dataset, we compare our models' performance against the persistence model, which is represented as:

Equation (4):

$$\hat{y}(t) = y(t - T)$$

where $\hat{y}(t)$ is the predicted GHI at time t, y(t) is the observed GHI at time t, and T is the prediction interval. In essence, the persistence model assumes all future values will equal the value observed T time units earlier.
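The persistence baseline of equation (4) and the RMSE comparison reduce to a few lines; the helper names below are our own illustrative framing, not code from the study.

```python
import numpy as np

def persistence_forecast(y, steps):
    """Equation (4): y_hat(t) = y(t - T). Shifting the observed series
    by `steps` samples yields forecasts for times steps .. len(y) - 1."""
    y = np.asarray(y, dtype=float)
    return y[:-steps]

def rmse(y_true, y_pred):
    """Root mean-squared error between observed and predicted GHI."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```

For a 1 h horizon at 5 min resolution, `steps` would be 12; comparing `rmse(y[steps:], persistence_forecast(y, steps))` against a model's RMSE yields the percentage improvements reported in section 4.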

4. Results and discussion

Comparing the performance of the persistence and C-GHI models indicates that the C-GHI outperforms the persistence model in all but one location for day-ahead predictions, and in all hour-ahead predictions. The persistence model demonstrates better performance in 30 min predictions across all regions, which is not surprising given the temporal auto-correlation in GHI values. We hypothesize that the poor performance of the CNN-based model relative to persistence for 1 d ahead forecasting in the EP region is due to a 24 h lagged auto-correlation within the GHI data measured at the site. Results in units of RMSE are presented for all temporal horizons for four regions with minimal correlation in table 2, and a comparison between the C-GHI and persistence models is shown in table 3. Results for all locations are shown in appendix tables A6 and A7.

Table 2. Performance RMSE (W m−2) by prediction horizon and region. AU (the University of Texas Austin), BC (Bethune-Cookman College), CL (Lyndon B. Johnson Space Center), ST (Southwest Technology Development Institute). Prediction lead-times with the lowest RMSE for each location are bolded.

| Time (min) | AU Train | AU Test | BC Train | BC Test | CL Train | CL Test | ST Train | ST Test |
|---|---|---|---|---|---|---|---|---|
| 5 | **83.49** | **190.06** | **134.62** | **166.01** | 154.35 | 191.59 | **79.40** | **141.60** |
| 10 | 112.35 | 195.58 | 118.85 | 171.34 | **160.21** | **186.42** | 80.25 | 143.36 |
| 15 | 137.03 | 195.42 | 142.79 | 168.19 | 161.95 | 188.15 | 99.44 | 145.26 |
| 20 | 125.94 | 197.90 | 144.14 | 166.90 | 148.96 | 193.36 | 104.17 | 147.22 |
| 25 | 125.42 | 198.64 | 139.31 | 169.80 | 148.96 | 193.36 | 118.91 | 153.43 |
| 30 | 134.35 | 199.23 | 147.61 | 172.49 | 163.16 | 195.01 | 122.82 | 157.45 |
| 60 | 101.74 | 191.71 | 148.56 | 177.26 | 162.80 | 189.48 | 93.96 | 158.20 |
| 1440 | 139.23 | 200.54 | 159.27 | 191.11 | 182.45 | 207.87 | 137.42 | 161.44 |

Table 3. Comparison in RMSE (W m−2) with persistence model as a benchmark.

| Time (min) | AU Perst. | AU C-GHI | BC Perst. | BC C-GHI | CL Perst. | CL C-GHI | ST Perst. | ST C-GHI |
|---|---|---|---|---|---|---|---|---|
| 30 | 147.55 | 199.23 | 160.65 | 172.49 | 164.28 | 195.01 | 126.83 | 157.45 |
| 60 | 199.97 | 191.71 | 215.84 | 177.26 | 213.98 | 189.48 | 198.44 | 158.20 |
| 1440 | 241.31 | 200.54 | 224.66 | 191.11 | 252.02 | 207.87 | 179.89 | 161.44 |

These results indicate that the RMSEs of both the C-GHI and persistence models generally increase as the time horizon increases. However, the C-GHI model's performance remains largely consistent in farther-ahead predictions across all regions. In contrast, the persistence model's performance decreases significantly as the time horizon increases. In the BS region, for example, the persistence model's day-ahead RMSE is 64% greater than its 30 min prediction RMSE; by contrast, the C-GHI model's RMSE increases by only 0.6%.

The C-GHI outperforms the persistence model by an average of 14.39% in hour-ahead prediction, and 13.35% in day-ahead prediction, across all locations. This indicates that, within a single region, the prediction error increases slowly as the temporal prediction horizon increases when utilizing a satellite-image-based model. We also see comparable results between the C-GHI and other state-of-the-art exogenous models. Direct comparisons between GHI prediction models are not possible because of differences in prediction regions, temporal horizons, and available data. Accordingly, we use percentage improvement over a persistence model to compare modeling techniques with different temporal and spatial characteristics. By this metric, the C-GHI's performance is on par with that of current state-of-the-art exogenous models.

We compare the C-GHI to two other comparable exogenous GHI prediction models. The first, proposed by Feng and Zhang (2020), utilizes a total-sky imager (thus not fully exogenous) and is trained on 3 years of data at NREL's South Table Mountain Campus, Golden, Colorado. Its 1 h ahead prediction improvement over persistence is 34.02%; the best C-GHI improvement for 1 h ahead prediction is 24.26%, in the CN region. Qing and Niu (2018) proposed a 1 d ahead time-series model to predict GHI using weather forecast datasets. Their model for Santiago in Cape Verde utilizes 30 months of data, a similar data size to ours, and showed a 30.68% improvement over the persistence model; this is comparable to our best 1 d ahead model, in the BS region, which showed a 25.52% improvement.

4.1. Model visualization

We hypothesize that the model prediction quality is the result of the C-GHI locating the prediction region and adjusting the size and scope of the input data. That is, given the whole US satellite imagery and historical GHI data for a specific point, the learning process encourages the C-GHI to seek out the prediction location and a sufficient surrounding area to optimally map an input to an output.

To test this hypothesis, we leverage saliency maps (introduced in section 3.3) to understand the geographic regions which most contribute to the GHI prediction. An example is illustrated in figure 4. In these maps, lighter colors indicate regions of higher gradients with respect to the prediction and thus a greater contribution to GHI prediction. In each location, the brighter region becomes wider and more diffuse around the correct region as the prediction horizon grows.

4.2. Deterministic cropping

The C-GHI demonstrates high predictive accuracy when the entire CONUS imagery is utilized to predict local GHI values. In order to understand which areas of the country contribute to a particular regional GHI prediction, we perform the same analysis, but crop the input images to a specific window around the prediction location. We apply fixed-location deterministic cropping as a data pre-processing step, given the target prediction region's location. By fitting and comparing models for different cropping sizes and prediction horizons, we aim to understand the extent to which increased regional information improves prediction accuracy across different prediction horizons.

What follows is an example of the cropping procedure at the AU location for one channel. First, we calculate the location of the AU site (lat, lon) = (30.29 N, 97.74 W) within the satellite imagery. Each image is then centered around the AU site and cropped into squares with side lengths of 10, 20, 40, 60, 100, 140, 180, 220, 260, 300, 360, 420, and 512 pixels. All cropped images are resized to 256 × 256 using bicubic interpolation to preserve the network architecture and maintain the number of parameters within the C-GHI. Finally, a separate model is trained for each cropping window size and time horizon using the same dataset. We show the results of the predictions in figure 5.
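A sketch of this fixed-location cropping step is given below. Two simplifications are assumptions of ours, not details from the paper: nearest-neighbor resizing stands in for bicubic interpolation, and windows are clipped at the image boundary since edge handling is not specified.

```python
import numpy as np

def crop_and_resize(img, center_rc, side, out_size=256):
    """Crop a `side` x `side` window centered on the target site's pixel
    location (row, col), then resize to out_size x out_size.
    Nearest-neighbor resizing stands in for bicubic interpolation;
    the window is clipped at the image boundary."""
    r, c = center_rc
    H, W = img.shape
    r0 = int(np.clip(r - side // 2, 0, max(H - side, 0)))
    c0 = int(np.clip(c - side // 2, 0, max(W - side, 0)))
    window = img[r0:r0 + side, c0:c0 + side]
    h, w = window.shape
    rows = (np.arange(out_size) * h) // out_size
    cols = (np.arange(out_size) * w) // out_size
    return window[rows[:, None], cols]
```

Resizing every window back to 256 × 256 keeps the network architecture and parameter count identical across cropping sizes, so only the spatial extent of the input varies between models.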


Figure 5. Model performance as a function of cropping window and temporal scale across multiple locations. Initial increases in cropping window significantly improve model performance, however, the improvements diminish as the window expands. The x-axes in the figure represent the side length of the original cropping window, and the y-axes represent RMSE. Dotted lines represent the performance of a model with no cropping. (data in table A1)


Results indicate that, for 5 min and 1 h predictions, prediction error decreases rapidly as the cropping window grows while the window is small, and levels off as the window becomes very large. Additionally, the effect of increasing the cropping window size becomes weaker in longer-term prediction. In figure 5, for 5 min ahead prediction in AU, the cropped-image model's error drops below that of the full-size-image model at approximately 30 000 pixels, the result of a 180 × 180 cropping window. For 1 h ahead prediction, this threshold increases to 150 000 pixels (roughly 380 × 380). For 1 d ahead prediction, the threshold is approximately 230 000 pixels (480 × 480). As all cropped-image models eventually surpass the predictive quality of full-size-image models, we conclude that regional image cropping is beneficial to GHI prediction, particularly in short-term prediction.

Additionally, we find that the predictive performance of models with noisy saliency maps (i.e. 5 min ahead prediction in AU and CL) is substantially improved when input images are cropped. We believe this is because cropping helps the model locate the prediction region and incorporate the wider regional cloud information surrounding the target point. For example, the 5 min ahead prediction in AU improved by 8.2% with a 300 × 300 window, while the 5 min ahead prediction in BC improved by only 4.0% even at 512 × 512; figure 4 shows that the AU saliency map for 5 min ahead prediction is noisier than that of BC. The ability of the CNN-based model to maintain predictive accuracy at increased time horizons, combined with the accuracy gained by cropping the input image, reveals that the C-GHI is accurately capturing the regional meteorological patterns which contribute to GHI values. The cropping window size required to achieve performance similar to an uncropped model increases as the prediction horizon increases.

4.3. Weather condition effects

Lastly, we test the sensitivity of C-GHI's prediction performance to the degree of cloud cover, focusing on the best performing regions, ED and BC. ED, located in McAllen, Texas, has a semi-arid climate under the Köppen climate classification, with very hot and humid summers and short, warm winters. BC, located in Daytona Beach, Florida, has a humid subtropical climate with warm, wet summers and cooler, drier winters. The two regions have similar climatological characteristics, but ED is drier and hotter.

To assess the sensitivity of model performance to the degree of cloud cover, we first label each day as 'sunny', 'partly cloudy', or 'cloudy'. The labels are based on data from the Weather Underground (The Weather Underground 2020). We use daily rather than hourly weather data, following the convention in the solar irradiance prediction literature. The cloud cover labels are assigned using the following criteria: (i) 'sunny' represents days with more than 90% clear sky conditions, or with more than 80% clear skies and the remainder partly cloudy; (ii) 'cloudy' represents days with either no clear sky conditions, or with more than 70% of the day mostly cloudy and/or cloudy; and (iii) 'partly cloudy' represents days that are neither sunny nor cloudy.
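The labeling criteria can be expressed as a small function. The function name and the representation of a day as three sky-condition fractions are our own illustrative framing of the daily Weather Underground data, not code from the study.

```python
def cloud_label(clear_frac, partly_frac, mostly_or_cloudy_frac):
    """Label a day from its fractions of clear, partly-cloudy, and
    mostly-cloudy/cloudy sky conditions, per criteria (i)-(iii) above.
    The three fractions are assumed to sum to 1."""
    # (i) sunny: >90% clear, or >80% clear with the remainder partly cloudy
    if clear_frac > 0.90 or (clear_frac > 0.80 and mostly_or_cloudy_frac == 0.0):
        return "sunny"
    # (ii) cloudy: no clear sky at all, or >70% mostly cloudy / cloudy
    if clear_frac == 0.0 or mostly_or_cloudy_frac > 0.70:
        return "cloudy"
    # (iii) everything else
    return "partly cloudy"
```

For instance, a day with 85% clear skies and 15% partly cloudy conditions is labeled sunny, while a day with no clear-sky hours is labeled cloudy regardless of its other fractions.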

Table 4 shows how test set performance changes across cloud cover conditions and prediction time horizons. Results indicate that as the prediction time horizon increases, C-GHI performs better than the persistence model, especially on cloudy and partly cloudy days. We find that the persistence model performs well in day-ahead predictions during sunny days, possibly due to high auto-correlation between consecutive sunny days. We also find that predictions with longer lead times tend to overestimate irradiance on cloudy days and underestimate irradiance on sunny days, so as to minimize the total sum of squared errors (figure 6). We hypothesize that this helps C-GHI better fit partly cloudy days, which are the most frequent in both regions, and thus benefits overall performance.


Figure 6. Scatterplots of ground-measured and predicted GHI. The images in row (a) are for the ED region and the images in row (b) are for the BC region; from left to right, the images represent models' fits for sunny, partly cloudy, and cloudy days.


Table 4. Performance of the persistence model (Perst.) and C-GHI in the (a) ED and (b) BC regions under different weather conditions; RMSE in W m−2, improvement over persistence in %. The percentages in parentheses after each weather condition give the proportion of days of that type.

(a) ED region

| Weather condition | 30 min Perst. | 30 min C-GHI | 30 min Impr. (%) | 1 h Perst. | 1 h C-GHI | 1 h Impr. (%) | 1 d Perst. | 1 d C-GHI | 1 d Impr. (%) |
|---|---|---|---|---|---|---|---|---|---|
| Sunny (43%) | 83.10 | 129.02 | −55.26 | 167.18 | 140.54 | 15.93 | 140.71 | 180.62 | −28.36 |
| Partly cloudy (45%) | 120.46 | 184.15 | −52.87 | 216.09 | 172.63 | 20.11 | 222.31 | 158.45 | 28.73 |
| Cloudy (12%) | 71.67 | 187.07 | −161.03 | 144.45 | 228.20 | −53.98 | 195.74 | 138.25 | 29.37 |

(b) BC region

| Weather condition | 30 min Perst. | 30 min C-GHI | 30 min Impr. (%) | 1 h Perst. | 1 h C-GHI | 1 h Impr. (%) | 1 d Perst. | 1 d C-GHI | 1 d Impr. (%) |
|---|---|---|---|---|---|---|---|---|---|
| Sunny (33%) | 108.65 | 131.63 | −21.15 | 171.27 | 128.90 | 24.74 | 138.02 | 137.38 | 0.46 |
| Partly cloudy (50%) | 167.25 | 168.01 | −0.45 | 222.50 | 166.13 | 25.33 | 218.34 | 176.62 | 19.10 |
| Cloudy (17%) | 147.17 | 149.27 | −1.42 | 197.19 | 161.90 | 17.90 | 240.02 | 177.54 | 26.03 |
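The improvement figures in table 4 are the relative RMSE reduction with respect to the persistence forecast. A minimal sketch (the function name is ours) reproduces the BC day-ahead cloudy entry:

```python
def improvement_pct(rmse_persistence, rmse_model):
    """Relative RMSE reduction over persistence; negative when persistence wins."""
    return (rmse_persistence - rmse_model) / rmse_persistence * 100.0

# BC region, day-ahead, cloudy days (table 4(b)): 240.02 -> 177.54 W m^-2.
print(round(improvement_pct(240.02, 177.54), 2))  # 26.03
```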

Assessing C-GHI's prediction performance as a function of cloud type is beyond the scope of this paper. However, future work analyzing the effect of cloud types on CNN-based solar irradiance prediction performance would be of great value to renewable energy planners, operators and regulators.

5. Conclusion

We propose a novel solar irradiance prediction model using convolutional neural networks and cloud imagery. The proposed C-GHI model is entirely exogenous, predicting solar irradiance from freely available satellite imagery of clouds. The model's performance is tested across various prediction horizons and compared with the persistence model across different regions. Experimental results show that C-GHI is highly generalizable and robust to changes in prediction horizon and location. The persistence model outperforms C-GHI for sub-hourly predictions, owing to the temporal auto-correlation in the GHI data. However, C-GHI's predictive performance remains steady with increasing lead times, while the persistence model's performance degrades significantly as the prediction horizon grows. C-GHI outperforms the persistence model by up to 26% in day-ahead prediction.
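As a point of reference, the persistence baseline used throughout simply carries the latest observation forward by the prediction horizon. The sketch below, built on a toy diurnal GHI curve of our own construction (not the paper's data), illustrates why persistence error grows with lead time:

```python
import numpy as np

def persistence_forecast(ghi, horizon_steps):
    """Naive persistence: the forecast for time t + h is the observation at time t."""
    return ghi[:-horizon_steps]  # aligned with the targets ghi[horizon_steps:]

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Toy diurnal GHI curve (W m^-2), 15 min sampling over one day; illustrative only.
t = np.arange(96)
ghi = np.clip(800.0 * np.sin(2 * np.pi * (t - 24) / 96.0), 0.0, None)

rmse_30min = rmse(ghi[2:], persistence_forecast(ghi, 2))  # 30 min ahead
rmse_2h = rmse(ghi[8:], persistence_forecast(ghi, 8))     # 2 h ahead
assert rmse_30min < rmse_2h  # persistence error grows with lead time
```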

Using saliency maps as an inference tool, we find that the model's improved accuracy can be attributed to the holistic approach of the CNN, which incorporates both region-specific information and information from surrounding regions; the latter is particularly helpful for long-term prediction. Additionally, we test image cropping as a preprocessing step and find that it improves model performance across all prediction horizons and is especially beneficial for short-term GHI prediction. When an appropriate window size is selected, the cropped model outperforms the model trained on full CONUS imagery.
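Target-location-focused cropping can be sketched as fixed-size array slicing centred on the target pixel, clamped at the image boundary so that the input shape stays constant. The image dimensions and helper name below are illustrative, not the paper's exact implementation:

```python
import numpy as np

def crop_around_target(image, row, col, side):
    """Crop a fixed side x side window centred on (row, col), clamped to bounds."""
    half = side // 2
    r0 = min(max(row - half, 0), image.shape[0] - side)
    c0 = min(max(col - half, 0), image.shape[1] - side)
    return image[r0:r0 + side, c0:c0 + side]

conus = np.zeros((512, 1024))  # placeholder for a CONUS cloud image
patch = crop_around_target(conus, 10, 1000, 180)  # target near an image corner
assert patch.shape == (180, 180)
```

Clamping (rather than zero-padding) keeps every crop fully inside the imagery, so targets near the image edge still receive a full window of real cloud pixels.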

While our proposed holistic approach yields high accuracy, there are additional avenues for model improvement, for example incorporating information such as wind speed and direction. Including such environmental factors could further increase prediction accuracy while still providing a fully exogenous GHI prediction framework. Future approaches could add such temporal information through time-series methods such as recurrent neural networks (LeCun et al 2015). Moreover, future extensions of this work that account for the influence of cloud types, collected through ceilometers, on the performance of irradiance prediction models will be of great interest to renewable energy policymakers and practitioners.

In summary, the C-GHI model combines the key benefit of physical GHI prediction models (no need for in-situ measurements) with the accuracy of statistical models. This flexibility allows for a low-cost exploration of PV siting locations at both household and utility scales. More accurate GHI forecasts can also aid utility-scale integration of renewable energy into the electricity grid. The uncertainty in day-ahead renewable energy production is a major hurdle to the cost-effective use of renewable energy, and improving the accuracy of solar production forecasts will be vital to utilizing renewable resources effectively.

Acknowledgments

The authors would like to acknowledge Purdue Climate Change Research Center (PCCRC), the Center for Environment (C4E) as well as the NSF Grants CRISP-1832688 and CMMI-1826161.

Data availability statement

The data that support the findings of this study are openly available at the following URL/DOI: www.nrel.gov/grid/solar-resource/confrrm.html.

Appendix: Supplementary tables

Table A1. Cropped-model RMSE (W m−2) by prediction horizon and side length of the crop window. Panels (a)–(d) correspond to regions BC, AU, CL and ST, respectively.

(a) BC

| Time (min) | 10 | 20 | 40 | 60 | 100 | 140 | 180 | 220 | 260 | 300 | 360 | 410 | 512 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 5 | 245.55 | 238.95 | 229.51 | 218.62 | 210.11 | 204.82 | 190.06 | 181.72 | 178.25 | 174.44 | 179.75 | 182.28 | 185.66 |
| 60 | 239.94 | 235.71 | 223.92 | 219.78 | 216.65 | 213.44 | 214.41 | 206.36 | 197.68 | 195.06 | 192.82 | 189.22 | 187.33 |
| 1440 | 259.53 | 254.85 | 250.46 | 248.52 | 235.19 | 229.93 | 220.79 | 215.55 | 205.42 | 204.68 | 201.31 | 201.70 | 199.71 |

(b) AU

| Time (min) | 10 | 20 | 40 | 60 | 100 | 140 | 180 | 220 | 260 | 300 | 360 | 410 | 512 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 5 | 232.43 | 223.02 | 207.90 | 189.01 | 171.68 | 167.82 | 167.24 | 167.43 | 166.13 | 165.26 | 163.63 | 159.51 | 159.42 |
| 60 | 259.63 | 253.37 | 238.60 | 228.30 | 222.53 | 213.18 | 207.65 | 202.83 | 200.38 | 190.79 | 187.23 | 176.16 | 172.77 |
| 1440 | 251.73 | 244.99 | 238.04 | 230.89 | 223.92 | 217.01 | 209.25 | 207.71 | 206.07 | 204.59 | 200.08 | 194.77 | 190.48 |

(c) CL

| Time (min) | 10 | 20 | 40 | 60 | 100 | 140 | 180 | 220 | 260 | 300 | 360 | 410 | 512 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 5 | 228.45 | 223.90 | 217.25 | 209.68 | 205.55 | 199.71 | 196.72 | 194.41 | 185.22 | 181.06 | 181.89 | 184.54 | 179.54 |
| 60 | 241.55 | 239.24 | 235.36 | 226.25 | 211.18 | 210.30 | 208.09 | 205.88 | 202.30 | 200.24 | 198.84 | 196.36 | 192.46 |
| 1440 | 232.59 | 226.67 | 224.36 | 221.99 | 219.58 | 219.60 | 213.39 | 214.96 | 212.75 | 210.28 | 209.64 | 206.39 | 202.98 |

(d) ST

| Time (min) | 10 | 20 | 40 | 60 | 100 | 140 | 180 | 220 | 260 | 300 | 360 | 410 | 512 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 5 | 182.73 | 175.24 | 166.24 | 165.10 | 155.82 | 153.44 | 150.16 | 148.23 | 145.79 | 144.57 | 143.73 | 143.30 | 140.66 |
| 60 | 219.14 | 210.11 | 199.20 | 185.74 | 179.89 | 178.98 | 176.13 | 170.10 | 167.77 | 167.50 | 163.53 | 162.03 | 157.62 |
| 1440 | 196.52 | 187.63 | 184.25 | 184.39 | 173.17 | 169.98 | 164.59 | 163.29 | 160.51 | 158.98 | 159.78 | 158.03 | 155.35 |

Table A2. C-GHI: best-performing regions per time horizon (no cropping); (a): Ryu et al (2019) used a CNN with total-sky imager data; (b): Qing and Niu (2018) used an LSTM (time series) with weather forecast data.

| Model | Time (min) | Data size (month) | Persistence RMSE | Model RMSE | Improvement (%) |
|---|---|---|---|---|---|
| C-GHI (BC) | 30 | 35 | 160.65 | 172.49 | −7.37 |
| C-GHI (CN) | 60 | 15 | 109.67 | 83.06 | 24.26 |
| C-GHI (BS) | 1440 | 36 | 287.95 | 214.44 | 25.52 |
| (a) (sunny) | 20 | 1/3 | 58 | 186 | −220.68 |
| (a) (cloudy) | 20 | 1/3 | 121 | 150 | −23.96 |
| (a) (overcast) | 20 | 1/3 | 91 | 111 | −21.97 |
| (b) | 1440 | 30 | 177.0311 | 122.7174 | 30.68 |
| (b) | 1440 | 132 | 209.2509 | 76.245 | 63.56 |

Table A3. MAPE by prediction horizon and region. FS (Florida Solar Energy Center), BC (Bethune-Cookman College), SS (Savannah State College in Georgia), MV (Mississippi Valley State University), ST (Southwest Technology Development Institute), EC (Elizabeth City State University), AU (the University of Texas at Austin), CN (West Texas A&M University), ED (University of Texas Pan American), EP (University of Texas at El Paso), CL (Lyndon B. Johnson Space Center), and BS (Bluefield State College).

Each cell: train / test MAPE.

| Time (min) | FS | BC | SS | MV | ST | EC | AU | CN | ED | EP | CL | BS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 5 | 28.02 / 32.62 | 22.69 / 29.24 | 19.66 / 30.72 | 24.55 / 30.72 | 11.67 / 21.98 | 29.26 / 35.75 | 14.73 / 36.44 | 35.93 / 41.22 | 21.90 / 33.81 | 15.45 / 31.99 | 28.56 / 36.18 | 25.23 / 39.23 |
| 10 | 29.36 / 33.54 | 19.42 / 29.09 | 21.42 / 30.84 | 21.82 / 30.38 | 11.64 / 22.15 | 29.18 / 35.97 | 20.16 / 37.64 | 38.75 / 42.25 | 28.75 / 34.98 | 16.22 / 32.09 | 30.41 / 36.18 | 32.25 / 41.46 |
| 15 | 29.74 / 33.93 | 24.62 / 29.66 | 21.00 / 30.21 | 22.19 / 31.62 | 14.98 / 22.74 | 24.73 / 35.67 | 26.08 / 38.41 | 37.21 / 41.96 | 25.70 / 33.87 | 23.67 / 32.79 | 30.53 / 36.08 | 29.03 / 39.31 |
| 20 | 29.98 / 32.76 | 24.36 / 29.66 | 22.69 / 31.39 | 22.64 / 31.16 | 15.85 / 22.92 | 27.45 / 35.14 | 23.14 / 38.87 | 38.97 / 44.58 | 25.26 / 34.26 | 21.32 / 31.75 | 27.13 / 36.40 | 33.29 / 38.67 |
| 25 | 31.84 / 34.46 | 23.32 / 29.45 | 21.32 / 29.91 | 25.36 / 31.51 | 18.65 / 24.12 | 26.14 / 36.62 | 23.42 / 39.11 | 38.16 / 44.04 | 27.13 / 35.24 | 25.04 / 33.28 | 30.70 / 36.23 | 32.90 / 40.38 |
| 30 | N/A | 25.10 / 29.95 | 26.00 / 31.42 | 22.79 / 31.59 | 20.19 / 25.04 | 30.11 / 38.11 | 25.54 / 39.10 | 40.56 / 44.70 | 25.84 / 35.01 | 22.75 / 34.56 | 30.73 / 37.69 | 30.96 / 40.11 |
| 60 | 28.67 / 33.62 | 25.02 / 30.16 | 21.01 / 32.46 | 21.10 / 30.23 | 14.06 / 22.39 | 18.50 / 38.95 | 18.62 / 37.04 | 31.50 / 40.40 | 28.11 / 34.52 | 11.65 / 30.72 | 30.94 / 36.81 | 32.71 / 42.80 |
| 1440 | 27.37 / 35.63 | 28.40 / 34.33 | 29.37 / 34.81 | 32.92 / 35.87 | 21.45 / 24.88 | 37.63 / 45.68 | 26.73 / 38.50 | 41.99 / 60.52 | 31.07 / 35.24 | 25.80 / 33.70 | 35.78 / 39.80 | 41.45 / 47.75 |

Table A4. R-squared (R²) by prediction horizon and region.

Each cell: train / test R².

| Time (min) | FS | BC | SS | MV | ST | EC | AU | CN | ED | EP | CL | BS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 5 | 0.7320 / 0.6322 | 0.8038 / 0.7070 | 0.8507 / 0.6609 | 0.7955 / 0.6676 | 0.9246 / 0.7594 | 0.7597 / 0.6528 | 0.9156 / 0.5652 | 0.6540 / 0.5261 | 0.8216 / 0.6020 | 0.9089 / 0.6109 | 0.6905 / 0.5010 | 0.8280 / 0.6188 |
| 10 | 0.7193 / 0.6280 | 0.8471 / 0.6879 | 0.8217 / 0.6554 | 0.8274 / 0.6763 | 0.9230 / 0.7535 | 0.7547 / 0.6425 | 0.8434 / 0.5315 | 0.5857 / 0.4944 | 0.8224 / 0.6020 | 0.8849 / 0.6033 | 0.6666 / 0.5275 | 0.7436 / 0.5981 |
| 15 | 0.7161 / 0.6339 | 0.7793 / 0.6993 | 0.8228 / 0.6536 | 0.8262 / 0.6650 | 0.8817 / 0.7469 | 0.8110 / 0.6516 | 0.7671 / 0.5321 | 0.6240 / 0.5231 | 0.7618 / 0.6124 | 0.7944 / 0.5976 | 0.6594 / 0.5192 | 0.7763 / 0.6091 |
| 20 | 0.6950 / 0.6326 | 0.7751 / 0.7039 | 0.8049 / 0.6407 | 0.8199 / 0.6605 | 0.8702 / 0.7400 | 0.7785 / 0.6606 | 0.8033 / 0.5200 | 0.5992 / 0.4490 | 0.7707 / 0.6168 | 0.8198 / 0.6035 | 0.7118 / 0.4923 | 0.7183 / 0.6304 |
| 25 | 0.6787 / 0.6128 | 0.7900 / 0.6935 | 0.8165 / 0.6587 | 0.7832 / 0.6609 | 0.8309 / 0.7175 | 0.7927 / 0.6342 | 0.8049 / 0.5164 | 0.6030 / 0.4823 | 0.7379 / 0.5956 | 0.7794 / 0.5897 | 0.6543 / 0.5121 | 0.7256 / 0.6072 |
| 30 | N/A | 0.7641 / 0.6837 | 0.7605 / 0.6463 | 0.8176 / 0.6505 | 0.8196 / 0.7025 | 0.7532 / 0.6239 | 0.7762 / 0.5135 | 0.5806 / 0.4788 | 0.7611 / 0.5986 | 0.8040 / 0.5716 | 0.6577 / 0.4834 | 0.7502 / 0.6070 |
| 60 | 0.7262 / 0.6405 | 0.7625 / 0.6749 | 0.8427 / 0.6330 | 0.8498 / 0.6953 | 0.8930 / 0.7050 | 0.9027 / 0.6084 | 0.8716 / 0.5497 | 0.7465 / 0.5231 | 0.7328 / 0.6178 | 0.9431 / 0.5935 | 0.6630 / 0.5216 | 0.7425 / 0.5803 |
| 1440 | 0.7502 / 0.5846 | 0.7267 / 0.6025 | 0.6993 / 0.5655 | 0.6604 / 0.5981 | 0.7749 / 0.6863 | 0.6222 / 0.4046 | 0.7629 / 0.4937 | 0.5050 / 0.2777 | 0.6846 / 0.6127 | 0.7636 / 0.5762 | 0.5641 / 0.4341 | 0.6012 / 0.5020 |

Table A5. nRMSE (%) by prediction horizon and region; the range of observed GHI is used as the divisor.

Each cell: train / test nRMSE.

| Time (min) | FS | BC | SS | MV | ST | EC | AU | CN | ED | EP | CL | BS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 5 | 14.00 / 16.06 | 11.37 / 14.04 | 8.53 / 12.74 | 11.20 / 13.95 | 6.49 / 11.58 | 10.43 / 12.66 | 6.98 / 15.89 | 8.76 / 10.10 | 11.48 / 17.00 | 7.15 / 14.54 | 13.31 / 16.53 | 9.67 / 14.49 |
| 10 | 14.33 / 16.16 | 10.05 / 14.49 | 9.33 / 12.85 | 10.29 / 13.77 | 6.56 / 11.73 | 10.53 / 12.83 | 9.39 / 16.35 | 9.59 / 10.43 | 14.33 / 17.13 | 8.04 / 14.69 | 13.82 / 16.08 | 11.80 / 14.89 |
| 15 | 14.41 / 16.03 | 12.08 / 14.23 | 9.30 / 12.88 | 10.33 / 14.01 | 8.13 / 11.88 | 9.25 / 12.67 | 11.45 / 16.33 | 9.13 / 10.13 | 13.27 / 16.80 | 10.75 / 14.80 | 13.97 / 16.23 | 11.02 / 14.68 |
| 20 | 14.93 / 16.05 | 12.19 / 14.12 | 9.75 / 13.12 | 10.51 / 14.10 | 8.52 / 12.04 | 10.01 / 12.50 | 10.53 / 16.54 | 9.43 / 10.89 | 13.02 / 16.71 | 10.06 / 14.69 | 12.85 / 16.68 | 12.37 / 14.27 |
| 25 | 15.33 / 16.48 | 11.78 / 14.36 | 9.46 / 12.79 | 11.53 / 14.10 | 9.73 / 12.55 | 9.68 / 12.98 | 10.48 / 16.60 | 9.39 / 10.54 | 13.92 / 17.17 | 11.14 / 14.97 | 14.07 / 16.35 | 12.21 / 14.71 |
| 30 | N/A | 12.49 / 14.59 | 10.81 / 13.02 | 10.58 / 14.30 | 10.05 / 12.88 | 10.57 / 13.17 | 11.23 / 16.65 | 9.65 / 10.60 | 13.29 / 17.11 | 10.49 / 15.26 | 14.00 / 16.82 | 11.65 / 14.71 |
| 60 | 14.10 / 16.19 | 12.58 / 15.01 | 8.90 / 13.48 | 9.77 / 13.58 | 7.94 / 12.95 | 6.67 / 13.49 | 8.50 / 16.02 | 7.33 / 9.48 | 13.20 / 15.76 | 5.48 / 14.31 | 13.82 / 16.34 | 11.97 / 15.44 |
| 1440 | 13.46 / 17.38 | 13.47 / 16.17 | 12.12 / 14.40 | 14.37 / 15.61 | 11.24 / 13.21 | 13.15 / 16.03 | 11.64 / 16.76 | 10.13 / 14.72 | 15.20 / 17.20 | 11.53 / 15.10 | 15.74 / 17.93 | 14.73 / 16.70 |
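The nRMSE values in table A5 normalize the RMSE by the range of the observed GHI and express the result in percent; a minimal sketch (our naming):

```python
import numpy as np

def nrmse_pct(y_true, y_pred):
    """RMSE normalized by the range of the observations, expressed in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return float(100.0 * rmse / (y_true.max() - y_true.min()))

# A constant 10 W m^-2 bias over a 0-100 W m^-2 observed range gives 10% nRMSE.
assert nrmse_pct([0.0, 100.0], [10.0, 110.0]) == 10.0
```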

Table A6. RMSE performance (W m−2) by prediction horizon and region. The prediction lead time with the lowest test RMSE for each location is shown in bold.

Each cell: train / test RMSE.

| Time (min) | FS | BC | SS | MV | ST | EC | AU | CN | ED | EP | CL | BS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 5 | 160.92 / 184.60 | 134.62 / **166.01** | 110.63 / **165.19** | 133.01 / 165.63 | 79.40 / **141.60** | 121.23 / 147.16 | 83.49 / **190.06** | 76.78 / 88.47 | 123.36 / 182.62 | 90.10 / 183.23 | 154.35 / 191.59 | 124.15 / 186.14 |
| 10 | 164.69 / 185.69 | 118.85 / 171.34 | 120.89 / 166.54 | 122.20 / 163.51 | 80.25 / 143.36 | 122.49 / 149.22 | 112.35 / 195.58 | 84.01 / 91.37 | 153.88 / 183.98 | 101.31 / 185.08 | 160.21 / **186.42** | 151.57 / 191.15 |
| 15 | 165.61 / **184.19** | 142.79 / 168.19 | 120.53 / 167.00 | 122.63 / 166.34 | 99.44 / 145.26 | 107.52 / 147.27 | 137.03 / 195.42 | 80.04 / 88.76 | 142.57 / 180.49 | 135.46 / 186.58 | 161.95 / 188.15 | 141.56 / 188.48 |
| 20 | 171.65 / 184.48 | 144.14 / 166.90 | 126.46 / 170.08 | 124.81 / 167.44 | 104.17 / 147.22 | 116.39 / **145.34** | 125.94 / 197.90 | 82.64 / 95.43 | 139.86 / **179.44** | 126.81 / 185.18 | 148.96 / 193.36 | 158.87 / **183.26** |
| 25 | 176.18 / 189.38 | 139.31 / 169.80 | 122.65 / 165.78 | 136.96 / 167.39 | 118.91 / 153.43 | 112.61 / 150.89 | 125.42 / 198.64 | 82.27 / 92.39 | 149.57 / 184.47 | 140.47 / 188.66 | 163.16 / 189.56 | 156.80 / 188.90 |
| 30 | N/A | 147.61 / 172.49 | 140.12 / 168.76 | 125.62 / 169.79 | 122.82 / 157.45 | 122.85 / 153.15 | 134.35 / 199.23 | 84.58 / 92.92 | 142.79 / 183.77 | 132.21 / 192.35 | 163.16 / 195.01 | 149.62 / 188.92 |
| 60 | 162.05 / 186.06 | 148.56 / 177.26 | 115.38 / 174.74 | 115.99 / **161.24** | 93.96 / 158.20 | 77.60 / 156.81 | 101.74 / 191.71 | 64.24 / **83.06** | 153.24 / 183.02 | 69.15 / **180.37** | 162.80 / 189.48 | 153.76 / 198.22 |
| 1440 | 154.69 / 199.75 | 159.27 / 191.11 | 157.09 / 186.90 | 170.60 / 185.39 | 137.42 / 161.44 | 152.87 / 186.33 | 139.23 / 200.54 | 105.28 / 96.32 | 163.22 / 184.77 | 145.39 / 190.25 | 182.45 / 207.87 | 189.20 / 214.44 |

Table A7. Comparison in RMSE (W m−2) with the persistence model as a benchmark.

Each cell: persistence RMSE / C-GHI RMSE.

| Time (min) | FS | BC | SS | MV | ST | EC | AU | CN | ED | EP | CL | BS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 30 | N/A | 160.65 / 172.49 | 143.45 / 168.76 | 141.03 / 169.79 | 126.83 / 157.45 | 116.69 / 153.15 | 147.55 / 199.23 | 65.62 / 92.92 | 165.94 / 183.77 | 129.94 / 192.35 | 164.28 / 195.01 | 160.65 / 188.92 |
| 60 | 228.41 / 186.06 | 215.84 / 177.26 | 202.51 / 174.74 | 193.50 / 161.24 | 198.44 / 158.20 | 171.24 / 156.81 | 199.97 / 191.71 | 109.67 / 83.06 | 217.45 / 183.02 | 216.08 / 180.37 | 213.98 / 189.48 | 208.58 / 198.22 |
| 1440 | 253.45 / 199.75 | 224.66 / 191.11 | 228.77 / 186.90 | 203.38 / 185.39 | 179.89 / 161.44 | 207.80 / 186.33 | 241.31 / 200.54 | 103.22 / 96.32 | 233.62 / 184.77 | 171.07 / 190.25 | 252.02 / 207.87 | 287.95 / 214.44 |

Footnotes

  • 4. The data are flagged by CONFRRM. We filtered out flagged data that fell outside a 4% confidence interval relative to irradiance values calculated from a physical irradiance model.
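This filtering step can be sketched as follows, interpreting the 4% interval as a relative deviation from the physical-model irradiance; the helper name is ours and the exact CONFRRM flagging logic is not reproduced here:

```python
import numpy as np

def qc_filter(measured, modeled, flagged, tol=0.04):
    """Keep unflagged samples, plus flagged samples within `tol` of the model."""
    measured = np.asarray(measured, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    # Relative deviation of the measurement from the physical-model irradiance.
    rel_err = np.abs(measured - modeled) / np.maximum(np.abs(modeled), 1e-9)
    keep = ~(np.asarray(flagged, dtype=bool) & (rel_err > tol))
    return measured[keep]

# A flagged sample deviating ~5% from the model irradiance is discarded.
print(qc_filter([100.0, 200.0], [100.0, 190.0], [False, True]))  # [100.]
```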
