Accurate **rainfall** **prediction** is an important task in hydrological study. Conventional **time** **series** models or intelligent models have been used to perform this task. However, such models are difficult for human analysts to interpret because their mechanisms are normally in parametric form. Furthermore, high-dimensional input data can make the model impractical. This study proposed the use of a **modular** model to address these problems for **monthly** **rainfall** **time** **series** **prediction**. A **fuzzy** **inference** **system** is used to capture the input-output relationship of the **rainfall** data, whereas a **nonlinear** optimization technique is used to capture uncertainty in the **time** dimension. In terms of **prediction** accuracy, eight **monthly** **rainfall** **time** **series** from the northeast region of Thailand were used to evaluate the proposed model. The experimental results showed that the proposed model provided higher **prediction** accuracy than the autoregressive moving average model, the back-propagation neural network, and the **modular** model without an aggregation layer.
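The Mamdani-style inference mechanism behind such a fuzzy prediction model can be sketched in a few lines. The two rules, the triangular membership shapes, and all numbers below are illustrative assumptions, not the paper's fitted system; real rules would be learned from the rainfall data.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_predict(prev_mm, universe=np.linspace(0.0, 300.0, 601)):
    """Two-rule Mamdani inference for next-month rainfall (mm):
       R1: IF previous month is LOW  THEN next month is LOW
       R2: IF previous month is HIGH THEN next month is HIGH
    Rule strengths clip the consequent sets (min), the clipped sets are
    aggregated (max), and the centroid of the aggregate is the forecast."""
    w_low = tri(prev_mm, -1.0, 0.0, 150.0)      # LOW antecedent strength
    w_high = tri(prev_mm, 100.0, 300.0, 301.0)  # HIGH antecedent strength
    out_low = np.minimum(w_low, tri(universe, -1.0, 30.0, 150.0))
    out_high = np.minimum(w_high, tri(universe, 100.0, 250.0, 301.0))
    agg = np.maximum(out_low, out_high)         # max aggregation
    return float((universe * agg).sum() / agg.sum())  # centroid defuzzification

wet = mamdani_predict(220.0)   # rainy antecedent -> high forecast
dry = mamdani_predict(20.0)    # dry antecedent -> low forecast
```

Because every rule is a readable IF-THEN statement, an analyst can inspect the model directly, which is the interpretability advantage the abstract claims over parametric time series models.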

Shoba G. and Shobha G. [1] analyzed various algorithms such as the Adaptive Neuro-**Fuzzy** **Inference** **System** (ANFIS), ARIMA, and the SLIQ decision tree for **rainfall** forecasting. R. Sangari and M. Balamurugan [3] applied data mining methods such as K-Nearest Neighbor (KNN), Naïve Bayes, decision trees, neural networks, and **fuzzy** logic to **rainfall** **prediction**. Beltrán-Castro [5] applied decomposition and ensemble **techniques** to **rainfall** on a daily basis; Ensemble Empirical Mode Decomposition (EEMD) was the decomposition technique adopted for dividing the data into multiple segments. Scholars such as D. Nayak and A. Mahapatra [7] used machine learning algorithms including the Multilayer Perceptron Neural Network (MLPNN), the Back-Propagation Algorithm (BPA), the Radial Basis Function Network (RBFN), the Self-Organizing Map (SOM), and the Support Vector Machine (SVM) to predict **rainfall**; their results favored the back-propagation algorithm. In [6], the authors used different neural network models to predict **rainfall**: a feed-forward neural network **using** back propagation, a cascade-forward back-propagation network (CBPN), a distributed **time**-delay neural network (DTDNN), and a **nonlinear** autoregressive exogenous network (NARX), and compared which gave the best results.

ANN may provide considerable **prediction** accuracy, and such a technique is easy to use and needs no prior knowledge of the **system** to establish the models. With these advantages, it is easy to understand why ANN has gained momentum and interest from many researchers in hydrological **prediction** in recent decades. On the other hand, **fuzzy** logic (FL), which is categorized as an interpretable grey-box model, formulates the **system** knowledge with rules in a way that is transparent to interpretation and analysis, and leaves the **inference** mechanism as the opaque part. For this reason, researchers have started to look at the use of FL to handle the issues of accuracy and interpretability in **rainfall** **prediction** models (Asklany et al., 2011; Wong et al., 2003; Huang et al., 1998).

**Monthly** **rainfall** **prediction** is an important task in water resource planning and management. Conventional **time** **series** **prediction** models and intelligent models have been applied to this task, and the attempt to develop better models is an ongoing endeavor. Besides accuracy, the transparency and practicality of the model are other important issues that need to be considered. To address these issues, this study proposes the use of a **modular** technique in a **monthly** **rainfall** **time** **series** **prediction** model. The proposed model consists of two main layers, namely a **prediction** layer and an aggregation layer. In the **prediction** layer, a Mamdani-type **fuzzy** **inference** **system** is used to capture the input-output relationship of the **rainfall** pattern. In the aggregation layer, Bayesian learning and **nonlinear** programming are used to capture the uncertainty in the **time** dimension. Eight **monthly** **rainfall** **time** **series** collected from the northeast region of Thailand are used to evaluate the proposed model. The experimental results showed that the proposed model could improve **prediction** accuracy over the single model. Furthermore, human analysts can interpret such a model because it contains a set of **fuzzy** rules.
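The idea of the aggregation layer can be illustrated with a toy stand-in: combine two module forecasts with a convex weight chosen to minimize validation error. The rainfall numbers, the two modules, and the dense grid search below are illustrative assumptions (the paper itself uses Bayesian learning and nonlinear programming for this step).

```python
import numpy as np

# Hypothetical validation data: actual rainfall (mm) and two module forecasts
actual = np.array([120.0, 95.0, 180.0, 60.0])
module_a = np.array([110.0, 100.0, 170.0, 70.0])  # e.g. one FIS module
module_b = np.array([140.0, 80.0, 200.0, 40.0])   # e.g. another FIS module

def aggregate_weight(y, f1, f2, grid=np.linspace(0.0, 1.0, 1001)):
    """Pick w in [0, 1] minimizing squared error of w*f1 + (1-w)*f2.
    A dense grid search stands in for the nonlinear programming step."""
    sse = [((w * f1 + (1.0 - w) * f2 - y) ** 2).sum() for w in grid]
    return float(grid[int(np.argmin(sse))])

w = aggregate_weight(actual, module_a, module_b)
combined = w * module_a + (1.0 - w) * module_b
```

On this toy data, the weighted combination achieves a lower squared error than either module alone, which is exactly the benefit the aggregation layer is meant to provide.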

Thirumalai, Chandrasegar, et al. [1] discuss the amount of **rainfall** in past years according to the crop seasons (Rabi, Kharif, and Zaid) and predict the **rainfall** for future years. The linear regression method is applied for early **prediction**: with Rabi and Kharif taken as variables, if one variable is given, the other can be predicted **using** linear regression. The standard deviation and mean were also calculated for future **prediction** of crop seasons. This implementation can be used by farmers to get an idea of which crop to harvest in each crop season. Geetha, A., and G. M. Nasira [2] implement a model which predicts weather conditions such as **rainfall**, fog, thunderstorms, and cyclones, which will help people take preventive measures. Data mining **techniques** were used, and a data mining tool named RapidMiner was used to model the decision trees. The data set, from Trivandrum, has attributes such as day, temperature, dew point, and pressure. The dataset is divided into a training set and a testing set, and the decision tree algorithm is applied; the accuracy is calculated, and actual and predicted values are compared. The accuracy is 80.67%, and to achieve a higher value the work can be extended by applying soft computing **techniques** such as **fuzzy** logic and genetic algorithms. Parmar, Aakash, Kinjal Mistree, and Mithila Sompura [3] discuss the different methods used for **rainfall** **prediction** in weather forecasting.
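The predict-one-season-from-the-other regression described in [1] can be sketched with ordinary least squares. The seasonal rainfall figures below are invented placeholders, not data from the cited study.

```python
import numpy as np

# Hypothetical seasonal rainfall (mm): predict Rabi rainfall from Kharif
kharif = np.array([850.0, 920.0, 780.0, 990.0, 870.0])
rabi = np.array([210.0, 235.0, 190.0, 250.0, 220.0])

# Ordinary least squares fit of a degree-1 polynomial: rabi ≈ slope*kharif + b
slope, intercept = np.polyfit(kharif, rabi, 1)

def predict_rabi(kharif_mm):
    """Predict Rabi-season rainfall given the Kharif-season total."""
    return slope * kharif_mm + intercept

est = predict_rabi(900.0)
```

With the mean and standard deviation of the residuals, a farmer-facing tool could also report an uncertainty band around `est`, matching the paper's use of those statistics.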

Datasets for **rainfall** **prediction** were downloaded from the official website of the National Oceanic and Atmospheric Administration (NOAA), maintained by the US Department of Commerce. NOAA is a scientific agency within the Department of Commerce focused on the conditions of the oceans and the atmosphere: it warns of dangerous weather, charts seas and skies, guides the use and protection of ocean and coastal resources, and conducts research to improve understanding and stewardship of the environment. NOAA maintains the NCEP/NCAR Reanalysis data. The NCEP/NCAR Reanalysis data set is a continually updated gridded data set representing the state of the Earth's atmosphere, incorporating observations and numerical weather **prediction** (NWP) model output dating back to 1948. It is a joint product of the National Centers for Environmental **Prediction** (NCEP) and the National Center for Atmospheric Research (NCAR), which have cooperated in a project called reanalysis to produce a retroactive record of more than 50 years of global analyses of atmospheric fields in support of NCEP/

Abstract: This study covers an inquisitive approach to developing an Artificial Neural Network (ANN) model to forecast sugarcane production. The input data set comprises the agro-climatic and socio-economic factors influencing sugarcane production, whereas the output is the actual sugarcane output. Different forecasting **techniques** based on the concept of **fuzzy** **time** **series** data have been deployed, though precision has been a doubtful factor in this scenario. In this paper, a performance analysis of **fuzzy** **time** **series** (FTS) models has been drawn. Data from the Food Corporation of India were gathered to perform the comparative study. The relative merits of various FTS models have been carefully investigated on the production data of different agro products.
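A minimal first-order fuzzy time series forecaster, in the spirit of the FTS models the paper compares, can be written as follows. The production figures, the interval count, and the Chen-style midpoint rule are illustrative assumptions, not the paper's models or data.

```python
import numpy as np

def fts_forecast(series, n_intervals=5):
    """First-order fuzzy time series forecast in the spirit of Chen (1996):
    partition the universe of discourse into equal intervals, fuzzify each
    observation to its interval, group fuzzy logical relationships
    A_i -> {A_j, ...}, and forecast the average midpoint of the successors
    of the last observed interval."""
    lo, hi = min(series), max(series)
    edges = np.linspace(lo, hi, n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2.0
    # Fuzzify: index of the interval containing each value
    idx = np.clip(np.searchsorted(edges, series, side="right") - 1,
                  0, n_intervals - 1)
    groups = {}
    for a, b in zip(idx[:-1], idx[1:]):        # fuzzy logical relations
        groups.setdefault(int(a), set()).add(int(b))
    last = int(idx[-1])
    succ = groups.get(last, {last})            # unseen state: predict itself
    return float(np.mean([mids[j] for j in succ]))

# Hypothetical yearly production figures
data = [320, 340, 310, 360, 380, 350, 370]
pred = fts_forecast(data)
```

The comparative studies mentioned above vary exactly these design choices: how the universe is partitioned, the order of the relationships, and how the forecast is defuzzified.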

Steel mill industries rely on a number of rotating parts, e.g., slew bearings. These bearings support highly loaded rotation and operate at a very low speed. When unforeseen failure occurs, the steel mill may suffer significant production loss. In order to predict unforeseen failure, a condition monitoring and prognosis method is required. This requirement is difficult to fulfil without online real-**time** predictive analytics capable of delivering a reliable **prediction**. The **prediction** method must be able to self-update its model to keep pace with the non-stationary processes typical of steel mills driven by production targets. Most processes are also subject to a number of changing external variables. This trait cannot be handled by a static model whose structure is fully determined in its initial design. A model should be flexible toward new concepts, which normally leads to expansion of its initial structure; however, an over-complex structure adversely affects the model's generalization because of overfitting. These research issues have led to the algorithmic development of so-called evolving intelligent systems (EISs) [2,3], which have attracted significant research interest over the past decade [4-7]. EISs have been successfully deployed in several predictive maintenance tasks [8-10].

Exponential smoothing was built on heuristics that had a good track record for generating reliable point predictions of demand in business applications. Heuristics also exist for determining the uncertainty surrounding predictions, and estimates of uncertainty can be very important: **rainfall** estimates are an important component of water resources applications; for example, in designing drainage **systems** and irrigation, an accurate estimate of **rainfall** is needed. There are also concerns with producing valid estimates **using** appropriate methods. In order to develop a comprehensive solution to the forecasting problem, including addressing the issue of uncertainty in predictions, a statistical model must be employed. One of the most successful forecasting methods in practice is based on exponential smoothing models. There are a variety of such models, each having the property that forecasts are weighted averages of past observations, with recent observations given relatively more weight than older observations. The name "exponential smoothing" reflects the fact that the weights decrease exponentially as the observations get older (Hyndman, 2002). The main aim of this paper is to explore
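The exponentially decaying weights can be made concrete with simple exponential smoothing in its recursive form. The series and the smoothing constant α = 0.3 below are illustrative, not data from the paper.

```python
def exp_smooth(series, alpha=0.3):
    """Simple exponential smoothing: the one-step-ahead forecast is a
    weighted average of past observations with weights alpha*(1-alpha)^k
    decaying exponentially with age k. The recursion
        level <- alpha*y + (1-alpha)*level
    computes exactly that weighted average."""
    level = series[0]            # initialize at the first observation
    for y in series[1:]:
        level = alpha * y + (1.0 - alpha) * level
    return level

obs = [12.0, 15.0, 14.0, 16.0, 13.0, 17.0]
fc = exp_smooth(obs)
```

Larger α gives more weight to recent observations; α = 1 reduces the forecast to the last observation, α near 0 to a long-run average.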

The inevitable price to pay for increased flexibility is the additional parameters. One has to choose not only an appropriate shrinkage factor ν and stopping value M, but also a smoothing parameter λ and a number of evenly spaced knots. Schmid and Hothorn (2008) carried out an analysis of the effect of the tuning parameters on boosting performance. It is worth emphasizing the role of λ in determining the degrees of freedom (df) of the weak learner: high values of λ lead to low degrees of freedom, which is preferable in order to keep the learner highly biased but with low variance. Schmid and Hothorn (2008) proposed df ∈ [3, 4] as a suitable amount for the degrees of freedom. We follow these prescriptions and note that reasonable alteration of this parameter is reflected solely in the computational **time**.
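The interplay of the shrinkage factor ν and the stopping value M can be sketched with L2-boosting. Regression stumps stand in here as a simple weak learner (the text above uses penalized spline base learners with df ∈ [3, 4]); the data are synthetic.

```python
import numpy as np

def stump_fit(x, r):
    """In-sample fit of the best single-split regression stump to residuals r."""
    best_sse, best_pred = np.inf, np.zeros_like(r)
    for s in np.unique(x)[:-1]:        # exclude the last value so no side is empty
        left = x <= s
        pred = np.where(left, r[left].mean(), r[~left].mean())
        sse = ((r - pred) ** 2).sum()
        if sse < best_sse:
            best_sse, best_pred = sse, pred
    return best_pred

def l2_boost(x, y, nu=0.1, M=200):
    """L2-boosting: repeatedly fit the weak learner to the current residuals
    and add only a fraction nu of its fit. Small nu (strong shrinkage) keeps
    each step weak and must be compensated by a larger stopping value M."""
    pred = np.full_like(y, y.mean())
    for _ in range(M):
        pred = pred + nu * stump_fit(x, y - pred)
    return pred

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, 40)
fit = l2_boost(x, y)
```

In practice M is chosen by cross-validation or an information criterion; running the sketch with smaller ν and the same M visibly underfits, illustrating the trade-off.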

Forecasting of industrial production is based on the assumption that different leading indicators should relate significantly and stably to the response, and therefore positively influence its **prediction**. However, there are many leading indicators that "claim" such an appealing property. Usually, one indicator is taken and its forecasting potential is judged by a bivariate autoregressive model; e.g., Dreger and Schumacher (2005) compare four indicators. The additional dimension does not necessarily improve the forecasting quality; on the contrary, an "inappropriate" extra variable deteriorates it. In consequence, different studies provide a surprisingly large variety of controversial conclusions about the forecasting power of the indicators. Instead of focusing on the indicators' **prediction** quality, we collect the nine most commonly used indicators and investigate how they affect the fitting. In other words, we examine in this section how redundant variables are treated by the fitting procedures. The aim is to learn whether it is still possible to obtain good forecasts despite the presence of probably inappropriate additional variables. Table 7 contains a list of the nine leading indicators frequently used in forecasting German IP (see Appendix A.1 for a detailed description of the indicators).

The hydrologic behavior of the **rainfall**-runoff process is a very complicated phenomenon controlled by a large number of climatic and physiographic factors that vary in both **time** and space. The relationship between **rainfall** and the resulting runoff is quite complex and is influenced by factors relating to topography and climate. In recent years, artificial neural networks (ANN), **fuzzy** logic, genetic algorithms, and chaos theory have been widely applied in the sphere of hydrology and water resources. ANN has recently been accepted as an efficient alternative tool for modeling complex hydrologic systems and is widely used for **prediction**; specific applications of ANN to hydrology include modeling the **rainfall**-runoff process. The **fuzzy** logic method was first developed to explain human thinking and decision **systems** by [1], and several studies have been carried out **using** **fuzzy** logic in hydrology and water resources planning [2]. The adaptive neuro-**fuzzy** **inference** **system** (ANFIS), an integration of neural networks and **fuzzy** logic, has the potential to capture the benefits of both fields in a single framework: it utilizes linguistic information from **fuzzy** logic as well as the learning capability of an ANN. ANFIS is a **fuzzy** mapping algorithm based on the Takagi-Sugeno-Kang (TSK) **fuzzy** **inference** **system** [3], [4], and has been used for many applications such as database management, **system**
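The Sugeno-type inference underlying ANFIS can be illustrated with a hand-coded first-order TSK system. The two rules, the Gaussian membership parameters, and the linear consequent coefficients below are invented for illustration; ANFIS would learn all of them from data by hybrid gradient/least-squares training.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function centered at c with width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def tsk_predict(x):
    """First-order Takagi-Sugeno-Kang inference with two illustrative rules:
       R1: IF x is LOW  THEN y = 0.2*x + 1.0
       R2: IF x is HIGH THEN y = 0.8*x - 2.0
    The output is the firing-strength-weighted average of the linear
    consequents, so no defuzzification step is needed."""
    w1 = gauss(x, 2.0, 1.5)    # LOW antecedent
    w2 = gauss(x, 8.0, 1.5)    # HIGH antecedent
    y1 = 0.2 * x + 1.0
    y2 = 0.8 * x - 2.0
    return (w1 * y1 + w2 * y2) / (w1 + w2)

low_out = tsk_predict(2.0)     # dominated by rule R1
high_out = tsk_predict(8.0)    # dominated by rule R2
```

Because the consequents are linear, the consequent coefficients can be estimated by least squares for fixed memberships, which is what makes the ANFIS hybrid learning rule efficient.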

One of the major issues with medical datasets is that they predominantly consist of a set of imbalanced output classes [18], [19], [20]. Another issue is an unfavorable feature-to-observation ratio, which usually occurs when a small dataset has many features; this results in a training model that cannot learn effectively, and hence a classifier that performs poorly [21]. Because of this, feature reduction **techniques**, which reduce the number of features without significantly diminishing the information contained within them, are used to overcome issues pertaining to such datasets. They do this by finding patterns in high-dimensional datasets. An example of one such technique is the statistical technique of Principal Component Analysis (PCA). PCA is based on the assumption that most of the information in the dataset is covered in the directions where the variations of the dataset are largest. Therefore, to perform feature reduction, it creates standardized linear projections, called principal components, that are linear combinations of the original features and are directed along the directions where most variation of the data exists, thereby reducing the number of variables [22]. Depending on how much variation needs to be preserved, the appropriate subspace covered by these principal components is selected [16]. When applied to medical datasets, PCA helps improve the generated training model. Another strength of PCA is its ability to define the space of spread of the overall data; this helps in detecting and eventually eliminating outliers [23], a necessity in medical settings.
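The PCA projection described above can be sketched via the singular value decomposition of the centered data matrix. The synthetic dataset below (with several near-duplicate features) is an assumption for illustration, not a medical dataset.

```python
import numpy as np

def pca_reduce(X, k):
    """Project data onto its first k principal components.
    Columns are centered; the right singular vectors of the centered matrix
    point along the directions of largest variance, in decreasing order."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]       # (scores, component loadings)

rng = np.random.default_rng(1)
# 100 observations, 5 features; three features are noisy copies of the
# first, so most variance lives along a single direction
base = rng.normal(size=(100, 1))
X = np.hstack([base,
               rng.normal(size=(100, 1)),
               base + 0.1 * rng.normal(size=(100, 3))])
scores, comps = pca_reduce(X, 2)
```

Choosing `k` by the fraction of variance retained (the cumulative squared singular values) corresponds to the subspace-selection step in [16]; unusually large scores flag candidate outliers.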

In terms of Q, the variance in the state equation, it is centered approximately over 0.1 and, thus, we are allowing for everything from very small to moderately large shifts in the AR coefficients.

Like any other aspect of science and engineering development, there has been a tremendous introduction of new concepts and ideas in **rainfall** and precipitation study in general. Notable among these is research in various directions, including the space-**time** structure and variability of **rainfall**. In this regard, there has been a significant shift from point process models to models based on concepts of scale invariance [3]. This is because point process models suffer from an inability to describe the statistical structure of **rainfall** over a wide range of scales, as well as from difficulty in parameter estimation, whereas scaling models provide parsimonious representations over a wide range of scales. These are supported by theoretical arguments and empirical evidence that **rainfall** exhibits a scale-invariant symmetry (e.g., [3] [4]). In this regard, the trend in scale-invariant **rainfall** models evolved around multiplicative cascades, which have their origin in the statistical theory of turbulence [3]. However, it is important to note that despite these good attributes, the estimation of parameters is not a simple issue [3]. As noted by Holley and Waymire [5], independent and identically distributed "bounded generators" give rise to non-ergodic cascades. Recent developments in stochastic **rainfall** analysis in this direction deal with the introduction of wavelet transforms and, importantly, the use of artificial neural networks, diffusion models (e.g., [6]), Markovian-type models (e.g., [7] [8]), and disaggregation models (e.g., [9]).
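A discrete multiplicative cascade of the kind referenced above can be simulated in a few lines. The branching factor, depth, and the lognormal generator below are illustrative assumptions; published cascade models for rainfall use carefully estimated generators.

```python
import numpy as np

def cascade(levels, rng):
    """Discrete multiplicative cascade: start from a unit mass on one cell
    and repeatedly split each cell in two, multiplying each half by an
    i.i.d. positive weight with mean 1 (here lognormal). Repeating the same
    multiplicative rule at every scale is what produces the scale-invariant,
    intermittent fields associated with rainfall cascades."""
    field = np.array([1.0])
    for _ in range(levels):
        field = np.repeat(field, 2)   # split every cell into two halves
        # lognormal with underlying mean -sigma^2/2 has expectation 1
        w = rng.lognormal(mean=-0.125, sigma=0.5, size=field.size)
        field = field * w
    return field

rng = np.random.default_rng(42)
f = cascade(8, rng)                   # 2**8 = 256 cells
```

The difficulty noted in [3] lies precisely in estimating the distribution of the weights `w` from observed rainfall, not in generating the cascade itself.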

CPU technology is slowly reaching its threshold; however, Moore's Law still holds true for GPUs. With the increasing scope of GPGPU computing, more and more applications are being ported to the GPU framework. One of the application areas best suited to GPGPU computing is image processing and computer vision. The high performance offered by GPUs makes them ideal for real-**time** applications. However, GPU technology gives optimum results only when certain criteria related to the degree of parallelism, image size, and memory transfers are met. Very small images consume more **time** in memory transfers between CPU and GPU than in computation on the GPU, while large images affect the response **time** owing to the increased computation. It is necessary to strike a fine balance between the image size and the computation **time**. We propose to use a **Fuzzy** **Inference** **System** (FIS) to estimate the most suitable values for these parameters and to show the difference between CPU and GPU computing methods. **Using** these values from the FIS, a programmer can develop deeper insights into the performance of real-**time** systems **using** GPUs.

It can be shown that the mean of g(z) interpolates between the two neighboring values, with weights inversely related to the distance between the neighboring points and z. The variance shares similar properties and can be large if the "distance" between observations is large. In general, arguments such as this show how our approach is almost nonparametric in spirit, providing much more flexible and robust predictions than standard parametric approaches. Another advantage of our modeling framework is its simplicity, being based on the standard state space model; thus, textbook methods for estimation, model comparison, and **prediction** are available. In this paper, we apply Bayesian methods **using** a Markov Chain Monte Carlo (MCMC) algorithm for posterior simulation. Conditional on a particular ordering of the data and a particular distance function, posterior simulation is easy, involving commonly known Bayesian simulation methods for state space models (e.g., Durbin and Koopman, 2002, or De Jong and Shephard, 1995) and simulation methods for stochastic volatility (e.g., Kim, Shephard and Chib, 1998). Thus, we only require a method for drawing from the posterior of the ordering and the parameters of the distance function (conditional on the other parameters in the model). This method is supplied in the following section.

Neuro-**Fuzzy** Systems (NFS) are computational intelligence tools that have recently been employed in hydrological modeling. In many common NFS, the learning algorithms are based on batch learning, where all the parameters of the **fuzzy** **system** are optimized off-line. Although these models have frequently been used, such a learning process has been criticized because the number of rules needs to be predefined by the user. This reduces the flexibility of the NFS architecture when dealing with data of different levels of complexity. On the other hand, online or local learning evolves through local adjustments in the model as new data are introduced in sequence. In this study, the dynamic evolving neural **fuzzy** **inference** **system** (DENFIS) is used, in which an evolving, online clustering algorithm called the Evolving Clustering Method (ECM) is implemented. ECM is an online, maximum-distance-based clustering method able to estimate the number of clusters in a data set and find their current centers in the input space through a fast, one-pass algorithm. The 10-minute **rainfall**-runoff **time** **series** from a small (23.22 km²) tropical catchment named Sungai Kayu Ara in Selangor, Malaysia, was used in this study.
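The one-pass, threshold-driven behavior of such evolving clustering can be sketched as follows. This is a simplified sketch in the spirit of ECM, under the assumption of a fixed distance threshold and centre-only updates (the full ECM also maintains and updates per-cluster radii); the 2-D data points are invented.

```python
import numpy as np

def evolving_cluster(data, dthr=1.0):
    """One-pass evolving clustering: each incoming sample either joins the
    nearest existing cluster (if within the distance threshold dthr) or
    creates a new cluster centred on itself. The number of clusters is
    therefore estimated from the data, not fixed in advance."""
    centres, counts = [], []
    for x in data:
        if centres:
            d = [np.linalg.norm(x - c) for c in centres]
            j = int(np.argmin(d))
            if d[j] <= dthr:
                counts[j] += 1
                # incremental (running-mean) centre update
                centres[j] = centres[j] + (x - centres[j]) / counts[j]
                continue
        centres.append(np.asarray(x, dtype=float))  # spawn a new cluster
        counts.append(1)
    return centres

# Two well-separated hypothetical groups of samples, arriving in sequence
data = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9], [0.1, 0.2]])
clusters = evolving_cluster(data, dthr=1.0)
```

In DENFIS, each such cluster seeds a fuzzy rule, so the rule base grows with the data instead of being predefined by the user.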

P. Samuel Quinan, Miriah Meyer et al. (2016): Meteorologists evaluate and analyze weather forecasts **using** visualization in order to examine the behavior of, and associations between, weather features. In a design study conducted with meteorologists in forecasting roles, the authors identified and attempted to address two significant recurring challenges in weather visualization: the use of inconsistent and often ineffective visual encoding practices across a large range of visualizations, and a lack of support for directly visualizing how different weather features relate across a collection of potential forecast outcomes. They present a characterization of the tasks and data associated with meteorological forecasting, propose a set of informed default encoding choices that integrate existing meteorological conventions with effective visualization practice, and extend a set of **techniques** as a first step toward directly visualizing the interactions of multiple features over an ensemble forecast. They discuss the integration of these contributions into a functional prototype tool and also reflect on the many practical challenges that arise when working with weather data.
