


Predictive analysis includes various statistical techniques from predictive modeling, machine learning, and data mining that analyze current and historical facts to make predictions about future or unknown events.

In business, predictive models exploit patterns found in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of the risk or potential associated with a particular set of conditions, guiding decision-making for candidate transactions.

What defines the functional effect of these technical approaches is that predictive analytics provides a predictive score (probability) for each individual (customer, employee, health care patient, product SKU, vehicle, component, machine, or other organizational unit) in order to determine, inform, or influence organizational processes that pertain to large numbers of individuals, such as in marketing, credit risk assessment, fraud detection, manufacturing, health care, and government operations including law enforcement.

Predictive analysis is used in actuarial science, marketing, financial services, insurance, telecommunications, retail, travel, mobility, health care, child protection, pharmacy, capacity planning, social networking, and other fields.

One of the most well-known applications is credit scoring, which is used throughout financial services. Scoring models process a customer's credit history, loan application, customer data, etc., in order to rank-order individuals by their likelihood of making future credit payments on time.





Definition

Predictive analytics is an area of statistics that deals with extracting information from data and using it to predict trends and behavior patterns. The unknown event of interest often lies in the future, but predictive analytics can be applied to any type of unknown, whether it is in the past, present, or future, for example, identifying a suspect after a crime has been committed, or detecting credit card fraud as it occurs. The core of predictive analytics relies on capturing relationships between explanatory variables and the predicted variables from past occurrences, and exploiting them to predict the unknown outcome. It is important to note, however, that the accuracy and usability of results will depend greatly on the level of data analysis and the quality of assumptions.

Predictive analytics is often defined as predicting at a more detailed level of granularity, that is, generating a predictive score (probability) for each individual organizational element. This distinguishes it from forecasting. For example, "Predictive analytics: technology that learns from experience (data) to predict the future behavior of individuals in order to drive better decisions." In future industrial systems, the value of predictive analytics will be to predict and prevent potential issues to achieve near-zero breakdown, and then to be integrated into prescriptive analytics for decision optimization. Furthermore, the converted data can be used for closed-loop product lifecycle improvement, which is the vision of the Industrial Internet Consortium.




Predictive Analytics Process

  1. Define project: Determine the project outcomes and deliverables, the scope of the effort, and the business objectives, and identify the data sets to be used.
  2. Data collection: Data mining for predictive analytics prepares data from multiple sources for analysis. This provides a complete view of customer interactions.
  3. Data analysis: The process of inspecting, cleaning, and modeling data with the objective of discovering useful information and arriving at conclusions.
  4. Statistics: Statistical analysis enables assumptions and hypotheses to be validated and tested using standard statistical models.
  5. Modeling: Predictive modeling provides the ability to automatically create accurate predictive models about the future. There are also options to choose the best solution through multi-modal evaluation.
  6. Deployment: Predictive model deployment provides the option to apply the analytical results to the everyday decision-making process in order to obtain results, reports, and output by automating the decisions based on the modeling.
  7. Model monitoring: Models are managed and monitored to review model performance and ensure that they deliver the expected results. (A minimal end-to-end sketch of these steps appears below.)
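To make the steps above concrete, here is a minimal, hypothetical sketch using pandas and scikit-learn; the file name customers.csv, the column names, and the choice of logistic regression are illustrative assumptions, not part of any standard process definition.

```python
# Hypothetical end-to-end sketch of the process above; data file and columns are invented.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# 2. Data collection: pull prepared data from one (or several merged) sources.
data = pd.read_csv("customers.csv")

# 3. Data analysis: inspect and clean (here, simply drop incomplete rows).
data = data.dropna()

# 4./5. Statistics and modeling: fit a predictive model on historical outcomes.
features = ["age", "income", "num_purchases"]        # assumed predictor columns
X_train, X_test, y_train, y_test = train_test_split(
    data[features], data["churned"], test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 6./7. Deployment and monitoring: score new cases and track performance over time.
scores = model.predict_proba(X_test)[:, 1]           # predictive score per individual
print("Holdout AUC:", roc_auc_score(y_test, scores))
```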



Types

Generally, the term predictive analytics is used to mean predictive modeling, "scoring" data with predictive models, and forecasting. However, people are increasingly using the term to refer to related analytical disciplines, such as descriptive modeling and decision modeling or optimization. These disciplines also involve rigorous data analysis and are widely used in business for segmentation and decision making, but they have different purposes and the statistical techniques underlying them vary.

Predictive model

A predictive model is a model of the relationship between the specific performance of a unit in a sample and one or more known attributes or features of the unit. The purpose of the model is to assess the likelihood that a similar unit in a different sample will exhibit the specific performance. This category encompasses models in many areas, such as marketing, where they look for subtle data patterns to answer questions about customer performance, or fraud detection models. Predictive models often perform calculations during live transactions, for example, to evaluate the risk or opportunity of a given customer or transaction in order to guide a decision. With advances in computing speed, individual agent modeling systems have become capable of simulating human behavior or reactions to given stimuli or scenarios.

The available sample units with known attributes and known performances are referred to as the "training sample". Units in other samples, with known attributes but unknown performance, are referred to as "out of [training] sample" units. The out-of-sample units do not necessarily bear a chronological relation to the training sample units. For example, the training sample may consist of literary attributes of writings by Victorian authors with known attribution, and the out-of-sample units may be newly discovered writings with unknown authorship; a predictive model may help attribute these works to known authors. Another example is given by analysis of blood splatter in simulated crime scenes, in which the out-of-sample unit is the actual blood splatter pattern from a crime scene. The out-of-sample units may be from the same time as the training units, from a previous time, or from a future time.

Descriptive model

Descriptive models quantify relationships in data in a way that is often used to classify customers or prospects into groups. Unlike predictive models that focus on predicting a single customer behavior (such as credit risk), descriptive models identify many different relationships between customers or products. Descriptive models do not rank-order customers by their likelihood of taking a particular action the way predictive models do. Instead, descriptive models can be used, for example, to categorize customers by their product preferences and life stage. Descriptive modeling tools can be utilized to develop further models that can simulate large numbers of individualized agents and make predictions.

Decision model

The decision model describes the relationship between all decision elements - known data (including results from prediction models), decisions, and decision-making results - to predict decision results involving multiple variables. These models can be used in optimization, maximizing certain results while minimizing others. Decision models are commonly used to develop decision logic or a set of business rules that will result in the desired action for each customer or circumstance.



Applications

Although predictive analysis can be used in many applications, we outline some examples where predictive analysis has shown a positive impact in recent years.

Analytical customer relationship management (CRM)

Analytical customer relationship management (CRM) is a frequent commercial application of predictive analysis. Methods of predictive analysis are applied to customer data to pursue CRM objectives, which involve constructing a holistic view of the customer no matter where their information resides in the company or the department involved. CRM uses predictive analysis in applications for marketing campaigns, sales, and customer services, to name a few. These tools are required in order for a company to posture and focus its efforts effectively across the breadth of its customer base. It must analyze and understand the products in demand or that have the potential for high demand, predict customers' buying habits in order to promote relevant products at multiple touch points, and proactively identify and mitigate issues that have the potential to lose customers or reduce the ability to gain new ones. Analytical customer relationship management can be applied throughout the customer lifecycle (acquisition, relationship growth, retention, and win-back). Several of the application areas described below (direct marketing, cross-sell, customer retention) are part of customer relationship management.

Child protection

Over the last 5 years, some child welfare agencies have started using predictive analytics to flag high-risk cases. The approach has been called "innovative" by the Commission to Eliminate Child Abuse and Neglect Fatalities (CECANF), and in Hillsborough County, Florida, where the lead child welfare agency uses a predictive modeling tool, there have been no abuse-related child deaths in the target population as of this writing.

Clinical decision support system

Experts use predictive analysis in health care primarily to determine which patients are at risk of developing certain conditions, like diabetes, asthma, heart disease, and other lifetime illnesses. Additionally, sophisticated clinical decision support systems incorporate predictive analytics to support medical decision making at the point of care. A working definition has been proposed by Jerome A. Osheroff and colleagues: Clinical decision support (CDS) provides clinicians, staff, patients, or other individuals with knowledge and person-specific information, intelligently filtered or presented at appropriate times, to enhance health and health care. It encompasses a variety of tools and interventions such as computerized alerts and reminders, clinical guidelines, order sets, patient data reports and dashboards, documentation templates, diagnostic support, and clinical workflow tools.

A 2016 study of neurodegenerative disorders provides a powerful example of a CDS platform to diagnose, track, predict, and monitor the progression of Parkinson's disease. Using large and multi-source imaging, genetic, clinical, and demographic data, these researchers developed a decision support system that can predict the state of the disease with high accuracy, consistency, and precision. They used classical model-based and machine learning-based methods to discriminate between patients and different control groups. Similar approaches may be used for predictive diagnosis and disease progression forecasting in many neurodegenerative disorders such as Alzheimer's, Huntington's, and amyotrophic lateral sclerosis, as well as for other clinical and biomedical applications where Big Data is available.

Collection analytics

Many portfolios have a set of delinquent customers who do not make their payments on time. The financial institution has to undertake collection activities on these customers to recover the amounts due. A lot of collection resources are wasted on customers who are difficult or impossible to recover. Predictive analytics can help optimize the allocation of collection resources by identifying the most effective collection agencies, contact strategies, legal actions, and other strategies for each customer, thus significantly increasing recovery while reducing collection costs.

Cross-sell

Often corporate organizations collect and maintain abundant data (e.g. customer records, sale transactions), because exploiting hidden relationships in the data can provide a competitive advantage. For an organization that offers multiple products, predictive analytics can help analyze customers' spending, usage, and other behavior, leading to efficient cross-selling, or selling additional products to current customers. This directly leads to higher profitability per customer and stronger customer relationships.

Customer retention

With the number of competing services available, businesses need to focus efforts on maintaining continuous customer satisfaction, rewarding consumer loyalty, and minimizing customer attrition. In addition, small increases in customer retention have been shown to increase profits disproportionately; one study concluded that a 5% increase in customer retention rates will increase profits by 25% to 95%. Businesses tend to respond to customer attrition on a reactive basis, acting only after the customer has initiated the process to terminate service. At this stage, the chance of changing the customer's decision is almost zero. Proper application of predictive analytics can lead to a more proactive retention strategy. By a frequent examination of a customer's past service usage, service performance, spending, and other behavior patterns, predictive models can determine the likelihood of a customer terminating service in the near future. An intervention with lucrative offers can increase the chance of retaining the customer. Silent attrition, the behavior of a customer to slowly but steadily reduce usage, is another problem that many companies face. Predictive analytics can also predict this behavior, so that the company can take proper actions to increase customer activity.

Direct Marketing

When marketing consumer products and services, there is the challenge of keeping up with competing products and consumer behavior. Apart from identifying prospects, predictive analytics can also help to identify the most effective combination of product versions, marketing material, communication channels, and timing that should be used to target a given consumer. The goal of predictive analytics is typically to lower the cost per order or cost per action.

Fraud detection

Fraud is a big problem for many businesses and can be of various types: inaccurate credit applications, fraudulent transactions (both offline and online), identity theft, and false insurance claims. These problems plague firms of all sizes in many industries. Some examples of likely victims are credit card issuers, insurance companies, retail merchants, manufacturers, business-to-business suppliers, and even service providers. A predictive model can help weed out the "bads" and reduce a business's exposure to fraud.

Predictive modeling can also be used to identify high-risk fraud candidates in business or the public sector. Mark Nigrini developed a risk-scoring method to identify audit targets. He describes the use of this approach to detect fraud in the franchisee sales reports of an international fast-food chain. Each location is scored using 10 predictors. The 10 scores are then weighted to give one final overall risk score for each location. The same scoring approach was also used to identify high-risk check-kiting accounts, potentially fraudulent travel agents, and questionable vendors. A reasonably complex model was used to identify fraudulent monthly reports submitted by divisional controllers.

The Internal Revenue Service (IRS) of the United States also uses predictive analytics to mine tax returns and identify tax fraud.

Recent advances in technology have also introduced predictive behavioral analysis for web fraud detection. This type of solution uses heuristics to study normal web user behavior and detects anomalies that indicate fraudulent attempts.

Portfolio, product, or economic prediction

Often the focus of analysis is not the consumer but the product, portfolio, firm, industry, or even the economy. For example, a retailer might be interested in predicting store-level demand for inventory management purposes. Or the Federal Reserve Board might be interested in predicting the unemployment rate for the next year. These types of problems can be addressed by predictive analytics using time series techniques (see below). They can also be addressed via machine learning approaches which transform the original time series into a feature vector space, where the learning algorithm finds patterns that have predictive power.

Project risk management

When employing risk management techniques, the results are always to predict and benefit from a future scenario. The capital asset pricing model (CAP-M) "predicts" the best portfolio to maximize return. Probabilistic risk assessment (PRA), when combined with mini-Delphi techniques and statistical approaches, yields accurate forecasts. These are examples of approaches that can extend from project to market, and from near to long term. Underwriting (see below) and other business approaches identify risk management as a predictive method.

Underwriting

Many businesses have to account for risk exposure due to their different services and determine the costs needed to cover the risk. For example, auto insurance providers need to accurately determine the amount of premium to charge to cover each automobile and driver. A financial company needs to assess a borrower's potential and ability to pay before granting a loan. For a health insurance provider, predictive analytics can analyze a few years of past medical claims data, as well as lab, pharmacy, and other records where available, to predict how expensive an enrollee is likely to be in the future. Predictive analytics can help underwrite these quantities by predicting the chances of illness, default, bankruptcy, etc. Predictive analytics can streamline the process of customer acquisition by predicting the future risk behavior of a customer using application-level data. Predictive analytics in the form of credit scores have reduced the amount of time it takes for loan approvals, especially in the mortgage market, where lending decisions are now made in a matter of hours rather than days or even weeks. Proper predictive analytics can lead to proper pricing decisions, which can help mitigate the future risk of default.



Technology and big data influences

Big data is a collection of data sets that are so large and complex that they become awkward to work with using traditional database management tools. The volume, variety, and velocity of big data have introduced challenges across the board for capture, storage, search, sharing, analysis, and visualization. Examples of big data sources include web logs, RFID, sensor data, social networks, Internet search indexing, call detail records, military surveillance, and complex data in astronomic, biogeochemical, genomic, and atmospheric sciences. Big data is the core of most predictive analytic services offered by IT organizations. Thanks to technological advances in computer hardware, such as faster CPUs, cheaper memory, and MPP architectures, and new technologies such as Hadoop, MapReduce, and in-database and text analytics for processing big data, it is now feasible to collect, analyze, and mine massive amounts of structured and unstructured data for new insights. It is also possible to run predictive algorithms on streaming data. Today, exploring big data and using predictive analytics is within reach of more organizations than ever before, and new methods that are capable of handling such datasets are being proposed.



Analytical techniques

The approaches and techniques used to conduct predictive analytics can broadly be grouped into regression techniques and machine learning techniques.

Regression techniques

Regression models are the mainstay of predictive analytics. The focus lies on establishing a mathematical equation as a model to represent the interactions between the different variables in consideration. Depending on the situation, there is a wide variety of models that can be applied while performing predictive analytics. Some of them are briefly discussed below.

Linear regression model

The linear regression model analyzes the relationship between the response or dependent variable and a set of independent or predictor variables. This relationship is expressed as an equation that predicts the response variable as a linear function of the parameters. These parameters are adjusted so that a measure of fit is optimized. Much of the effort in model fitting is focused on minimizing the size of the residual, as well as ensuring that it is randomly distributed with respect to the model predictions.

The goal of regression is to select the parameters of the model so as to minimize the sum of the squared residuals. This is referred to as ordinary least squares (OLS) estimation and results in best linear unbiased estimates (BLUE) of the parameters if and only if the Gauss-Markov assumptions are satisfied.

Once the model has been estimated, we would be interested to know whether the predictor variables belong in the model, that is, is the estimate of each variable's contribution reliable? To do this we can check the statistical significance of the model's coefficients, which can be measured using the t-statistic. This amounts to testing whether the coefficient is significantly different from zero. How well the model predicts the dependent variable based on the value of the independent variables can be assessed by using the R² statistic. It measures the predictive power of the model, that is, the proportion of the total variation in the dependent variable that is "explained" (accounted for) by variation in the independent variables.
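As a rough illustration of the above, the following sketch fits an ordinary least squares model to simulated data with statsmodels and inspects the coefficient t-statistics and the R² statistic; the data and true coefficients are invented for the example.

```python
# Illustrative OLS fit on simulated data: parameters, t-statistics, and R-squared.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=200)   # assumed true relationship plus noise

X = sm.add_constant(x)            # adds the intercept term
model = sm.OLS(y, X).fit()        # ordinary least squares estimation

print(model.params)               # estimated intercept and slope
print(model.tvalues)              # t-statistics for each coefficient
print(model.rsquared)             # proportion of variance explained
```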

Discrete choice models

Multiple regression (above) is generally used when the response variable is continuous and has an unbounded range. Often the response variable may not be continuous but rather discrete. While mathematically it is feasible to apply multiple regression to discrete ordered dependent variables, some of the assumptions behind the theory of multiple linear regression no longer hold, and there are other techniques, such as discrete choice models, which are better suited for this type of analysis. If the dependent variable is discrete, some of the leading methods are logistic regression, multinomial logit, and probit models. Logistic regression and probit models are used when the dependent variable is binary.

Logistic regression

In a classification setting, assigning outcome probabilities to observations can be achieved through the use of a logistic model, which is basically a method that transforms information about the binary dependent variable into an unbounded continuous variable and estimates a regular multivariate model (see Allison's Logistic Regression for more information on the theory of logistic regression).

The Wald and likelihood-ratio tests are used to test the statistical significance of each coefficient b in the model (analogous to the t-tests used in OLS regression; see above). A test assessing the goodness of fit of a classification model is the "percentage correctly predicted".
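A brief, illustrative sketch (on simulated data, using statsmodels) of fitting a logistic regression, reading off the per-coefficient Wald statistics from the summary, and computing the percentage correctly predicted:

```python
# Illustrative logistic regression on simulated binary data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=500)
p = 1 / (1 + np.exp(-(0.5 + 1.5 * x)))               # assumed true probabilities
y = rng.binomial(1, p)                                # observed binary outcomes

X = sm.add_constant(x)
logit = sm.Logit(y, X).fit(disp=0)

print(logit.summary())                                # coefficients and Wald z-statistics
pred = (logit.predict(X) > 0.5).astype(int)
print("Percentage correctly predicted:", (pred == y).mean())
```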

Multinomial logistic regression

An extension of the binary logit model to cases where the dependent variable has more than two categories is the multinomial logit model. In such cases, collapsing the data into two categories might not make good sense or may lead to loss of the richness of the data. The multinomial logit model is the appropriate technique in these cases, especially when the dependent variable categories are not ordered (for example colors like red, blue, green). Some authors have extended multinomial regression to include feature selection/importance methods such as random multinomial logit.
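As an illustrative sketch only, a multinomial (softmax) logit can be fitted with scikit-learn on any data set whose outcome has several unordered categories; the iris data here simply stand in for such an outcome.

```python
# Multinomial logit sketch on a three-class problem with unordered categories.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)                 # y has 3 unordered categories
clf = LogisticRegression(max_iter=1000)           # multinomial logit with the default lbfgs solver
clf.fit(X, y)
print(clf.predict_proba(X[:3]))                   # per-class probabilities for 3 observations
```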

Probit regression

The probit model offers an alternative to logistic regression for modeling categorical dependent variables. Even though the outcomes tend to be similar, the underlying distributions are different. Probit models are popular in social sciences like economics.

A good way to understand the key difference between probit and logit models is to assume that the dependent variable is driven by a latent variable z, which is a sum of a linear combination of explanatory variables and a random noise term.

We do not observe z but instead observe y, which takes the value 0 (when z < 0) or 1 (otherwise). In the logit model we assume that the random noise term follows a logistic distribution with mean zero. In the probit model we assume that it follows a normal distribution with mean zero. Note that in the social sciences (e.g. economics), probit is often used to model situations where the observed variable is continuous but takes values between 0 and 1.
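The latent-variable view can be made concrete with a small simulated example, a sketch assuming statsmodels is available; it fits both a probit and a logit to the same data so the similarity of the two models can be seen.

```python
# Sketch contrasting probit and logit on simulated data, following the
# latent-variable view: y = 1 when z = a + b*x + noise > 0.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.normal(size=1000)
z = 0.3 + 1.2 * x + rng.normal(size=1000)   # normal noise, so probit matches the data-generating process
y = (z > 0).astype(int)                     # we observe only whether z crossed zero

X = sm.add_constant(x)
probit = sm.Probit(y, X).fit(disp=0)
logit = sm.Logit(y, X).fit(disp=0)

print("Probit coefficients:", probit.params)
print("Logit coefficients: ", logit.params)   # similar fits; the logit scale is roughly 1.6x larger
```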

Logit versus probit

The probit model has been around longer than the logit model. They behave similarly, except that the logistic distribution tends to have slightly flatter tails. One of the reasons the logit model was formulated was that the probit model was computationally difficult due to the requirement of numerically calculating integrals. Modern computing has made this computation fairly simple. The coefficients obtained from the logit and probit models are fairly close. However, the odds ratio is easier to interpret in the logit model.

The practical reasons for choosing a probit model over logistics models are:

  • There is a strong belief that the underlying distribution is normal
  • The actual event is not a binary outcome (e.g., bankruptcy status) but a proportion (e.g., the proportion of the population at different debt levels).

Time series model

Time series models are used for predicting or forecasting the future behavior of variables. These models account for the fact that data points taken over time may have an internal structure (such as autocorrelation, trend, or seasonal variation) that should be accounted for. As a result, standard regression techniques cannot be applied to time series data, and a methodology has been developed to decompose the trend, seasonal, and cyclical components of the series. Modeling the dynamic path of a variable can improve forecasts, since the predictable component of the series can be projected into the future.

Time series models estimate difference equations containing stochastic components. Two commonly used forms of these models are autoregressive models (AR) and moving-average (MA) models. The Box-Jenkins methodology (1976), developed by George Box and G.M. Jenkins, combines the AR and MA models to produce the ARMA (autoregressive moving average) model, which is the cornerstone of stationary time series analysis. ARIMA (autoregressive integrated moving average) models, on the other hand, are used to describe non-stationary time series. Box and Jenkins suggest differencing a non-stationary time series to obtain a stationary series to which an ARMA model can be applied. Non-stationary time series have a pronounced trend and do not have a constant long-run mean or variance.

Box and Jenkins proposed a three-stage methodology involving model identification, estimation, and validation. The identification stage involves identifying whether the series is stationary or not and whether there is any seasonality, by examining plots of the series and of the autocorrelation and partial autocorrelation functions. In the estimation stage, models are estimated using non-linear time series or maximum likelihood estimation procedures. Finally, the validation stage involves diagnostic checking, such as plotting the residuals to detect outliers and evidence of model fit.
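A compact, illustrative Box-Jenkins-style sketch with statsmodels (the simulated series and the ARIMA(1,1,1) order are assumptions made only for demonstration): difference and identify, estimate, inspect diagnostics, and forecast.

```python
# Illustrative ARIMA fit and forecast on a simulated non-stationary series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
series = np.cumsum(rng.normal(size=300)) + 0.05 * np.arange(300)  # random walk with drift

model = ARIMA(series, order=(1, 1, 1))    # AR(1), one difference, MA(1)
result = model.fit()                      # estimation stage

print(result.summary())                   # coefficient estimates and diagnostics
print(result.forecast(steps=12))          # project the series 12 steps ahead
```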

In recent years, time series models have become more sophisticated and attempt to model conditional heteroskedasticity, with models such as ARCH (autoregressive conditional heteroskedasticity) and GARCH (generalized autoregressive conditional heteroskedasticity), which are frequently used for financial time series. In addition, time series models are also used to understand inter-relationships among economic variables represented by systems of equations using VAR (vector autoregression) and structural VAR models.

Survival or duration analysis

Survival analysis is another name for time-to-event analysis. These techniques were primarily developed in medicine and biology, but they are also widely used in the social sciences like economics, as well as in engineering (reliability and failure time analysis).

Censoring and non-normality, which are characteristic of survival data, generate difficulty when trying to analyze the data using conventional statistical models such as multiple linear regression. The normal distribution, being a symmetric distribution, takes positive as well as negative values, but duration by its very nature cannot be negative, and therefore normality cannot be assumed when dealing with duration/survival data. Hence the normality assumption of regression models is violated.

The assumption is that if the data were not censored they would be representative of the population of interest. In survival analysis, censored observations arise whenever the dependent variable of interest represents the time to a terminal event and the duration of the study is limited in time.

An important concept in survival analysis is the hazard rate, defined as the probability that the event will occur at time t conditional on surviving until time t. Another concept related to the hazard rate is the survival function, which can be defined as the probability of surviving to time t.

Most models try to model the hazard rate by choosing the underlying distribution depending on the shape of the hazard function. A distribution whose hazard function slopes upward is said to have positive duration dependence, a decreasing hazard shows negative duration dependence, whereas a constant hazard is a memoryless process usually characterized by the exponential distribution. Some of the distributional choices in survival models are: F, gamma, Weibull, log normal, inverse normal, exponential, etc. All these distributions are for a non-negative random variable.

Duration models can be parametric, non-parametric, or semi-parametric. Some of the models commonly used are the Kaplan-Meier estimator (non-parametric) and the Cox proportional hazards model (semi-parametric).
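A minimal sketch of both approaches, assuming the third-party lifelines package is installed; the data frame and its columns are simulated purely for illustration.

```python
# Illustrative Kaplan-Meier curve and Cox proportional hazards fit on simulated data.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(4)
n = 200
df = pd.DataFrame({
    "age": rng.normal(50, 10, n),
    "duration": rng.exponential(10, n),          # time until the terminal event
    "observed": rng.binomial(1, 0.8, n),         # 0 = censored observation
})

kmf = KaplanMeierFitter()
kmf.fit(df["duration"], event_observed=df["observed"])
print(kmf.survival_function_.head())             # estimated probability of surviving past time t

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="observed")
cph.print_summary()                              # hazard ratios for the covariates
```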

Classification and regression trees (CART)

Globally-optimal classification tree analysis (GO-CTA) (also called hierarchical optimal discriminant analysis) is a generalization of optimal discriminant analysis that may be used to identify the statistical model that has maximum accuracy for predicting the value of a categorical dependent variable for a dataset consisting of categorical and continuous variables. The output of HODA is a non-orthogonal tree that combines categorical variables and cut points for continuous variables that yields maximum predictive accuracy, an assessment of the exact Type I error rate, and an evaluation of potential cross-generalizability of the statistical model. Hierarchical optimal discriminant analysis may be thought of as a generalization of Fisher's linear discriminant analysis. Optimal discriminant analysis is an alternative to ANOVA (analysis of variance) and regression analysis, which attempt to express one dependent variable as a linear combination of other features or measurements. However, ANOVA and regression analysis give a dependent variable that is a numerical variable, while hierarchical optimal discriminant analysis gives a dependent variable that is a class variable.

Classification and regression trees (CART) are a nonparametric decision tree learning technique that produces either classification or regression trees, depending on whether the dependent variable is categorical or numeric, respectively.

Decision trees are formed by a collection of rules based on variables in the modeling data set:

  • Rules based on variables' values are selected to get the best split to differentiate observations based on the dependent variable
  • Once a rule is selected and splits a node into two, the same process is applied to each "child" node (i.e. it is a recursive procedure)
  • Splitting stops when CART detects no further gain can be made, or some pre-set stopping rules are met. (Alternatively, the data are split as much as possible and then the tree is later pruned.)

Each tree branch ends at the terminal node. Each observation falls into one and precisely one terminal node, and each terminal node is uniquely determined by a set of rules.

A very popular method for predictive analytics is Leo Breiman's random forests.
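An illustrative sketch with scikit-learn fitting a single CART-style tree and a random forest on synthetic data; the data, the depth limit, and the forest size are arbitrary choices made only for the example.

```python
# CART-style decision tree and random forest on a toy classification task;
# the *Regressor counterparts handle numeric dependent variables the same way.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)   # depth limit acts as a simple stopping rule
print(export_text(tree))                                           # the splitting rules, one per line
print("Tree accuracy:  ", tree.score(X_test, y_test))

forest = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
print("Forest accuracy:", forest.score(X_test, y_test))
```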

Multivariate adaptive regression splines

Multivariate adaptive regression splines (MARS) is a non-parametric technique that builds flexible models by fitting piecewise linear regressions.

An important concept associated with regression splines is that of a knot. A knot is where one local regression model gives way to another and thus is the point of intersection between two splines.

In multivariate and adaptive regression splines, basis functions are the tool used for generalizing the search for knots. Basis functions are a set of functions used to represent the information contained in one or more variables. Multivariate and adaptive regression splines models almost always create basis functions in pairs.

The multivariate and adaptive regression splines approach deliberately overfits the model and then prunes to get to the optimal model. The algorithm is computationally very intensive, and in practice we are required to specify an upper limit on the number of basis functions.

Machine learning techniques

Machine learning, a branch of artificial intelligence, was originally employed to develop techniques to enable computers to learn. Today, since it includes a number of advanced statistical methods for regression and classification, it finds application in a wide variety of fields including medical diagnostics, credit card fraud detection, face and speech recognition, and analysis of the stock market. In certain applications it is sufficient to directly predict the dependent variable without focusing on the underlying relationships between variables. In other cases, the underlying relationships can be very complex and the mathematical form of the dependencies unknown. For such cases, machine learning techniques emulate human cognition and learn from training examples to predict future events.

A brief discussion of some of the methods commonly used for predictive analytics is provided below. A detailed study of machine learning can be found in Mitchell (1997).

Neural network

Neural networks are sophisticated nonlinear modeling techniques that are able to model complex functions. They can be applied to problems of prediction, classification, or control in a wide spectrum of fields such as finance, cognitive psychology/neuroscience, medicine, engineering, and physics.

Neural networks are used when the exact nature of the relationship between inputs and output is not known. A key feature of neural networks is that they learn the relationship between inputs and output through training. There are three types of training used by different neural networks: supervised and unsupervised training and reinforcement learning, with supervised being the most common one.

Some examples of neural network training techniques are backpropagation, quick propagation, conjugate gradient descent, projection operator, Delta-Bar-Delta, etc. Some network architectures are the multilayer perceptron, Kohonen networks, Hopfield networks, etc.

Multilayer perceptron (MLP)

The multilayer perceptron (MLP) consists of an input layer and an output layer with one or more hidden layers of nonlinearly-activating nodes or sigmoid nodes. This is determined by the weight vectors, and it is necessary to adjust the weights of the network. Backpropagation employs gradient descent to minimize the squared error between the network output values and the desired values for those outputs. The weights are adjusted through an iterative process of repeated presentation of attributes. Small changes in the weights to get the desired values are achieved by the process called training the network and are done by the training set (learning rule).
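A small illustrative sketch of a multilayer perceptron trained by backpropagation using scikit-learn's MLPClassifier; the synthetic data and layer sizes are assumptions made only for demonstration.

```python
# Multilayer perceptron trained by backpropagation-style gradient descent.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)              # neural networks train better on scaled inputs
mlp = MLPClassifier(hidden_layer_sizes=(20, 10),    # two hidden layers
                    activation="logistic",          # sigmoid nodes, as described above
                    max_iter=2000, random_state=0)
mlp.fit(scaler.transform(X_train), y_train)
print("Test accuracy:", mlp.score(scaler.transform(X_test), y_test))
```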

Radial basis functions

A radial basis function (RBF) is a function which has a distance criterion with respect to a center built into it. Such functions can be used very efficiently for interpolation and for smoothing of data. Radial basis functions have been applied in the area of neural networks, where they are used as a replacement for the sigmoidal transfer function. Such networks have three layers: the input layer, the hidden layer with the RBF non-linearity, and a linear output layer. The most popular choice for the non-linearity is the Gaussian. RBF networks have the advantage of not being locked into local minima as the feed-forward networks such as the multilayer perceptron are.

Support vector machines

Support vector machines (SVM) are used to detect and exploit complex patterns in data by clustering, classifying, and ranking the data. They are learning machines that are used to perform binary classifications and regression estimations. They commonly use kernel-based methods to apply linear classification techniques to non-linear classification problems. There are a number of types of SVM, such as linear, polynomial, sigmoid, etc.
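A short sketch, using scikit-learn on a synthetic non-linear problem, that tries the kernel types mentioned above; the data set and the kernels compared are purely illustrative.

```python
# SVM kernels mapping a non-linear problem into a linearly separable feature space.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)   # classes not linearly separable
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    print(kernel, clf.score(X_test, y_test))                  # test accuracy per kernel
```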

Naïve Bayes

Naïve Bayes, based on Bayes' conditional probability rule, is used for performing classification tasks. Naïve Bayes assumes the predictors are statistically independent, which makes it an effective classification tool that is easy to interpret. It is best employed when faced with the "curse of dimensionality" problem, i.e. when the number of predictors is very high.
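An illustrative naïve Bayes sketch using scikit-learn's GaussianNB on synthetic high-dimensional data; the data are invented, and the point is simply that the independence assumption keeps fitting cheap even with many predictors.

```python
# Gaussian naive Bayes: each predictor is treated as conditionally independent given the class.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1000, n_features=50, random_state=0)  # many predictors
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

nb = GaussianNB().fit(X_train, y_train)
print("Test accuracy:", nb.score(X_test, y_test))
print("Class probabilities for one case:", nb.predict_proba(X_test[:1]))
```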

k-nearest neighbors

The nearest neighbor algorithm (KNN) belongs to the class of statistical pattern recognition methods. The method does not impose a priori any assumptions about the distribution from which the modeling sample is drawn. It involves a training set with both positive and negative values. A new sample is classified by calculating the distance to the nearest neighboring training case. The sign of that point will determine the classification of the sample. In the k-nearest neighbor classifier, the k nearest points are considered and the sign of the majority is used to classify the sample. The performance of the kNN algorithm is influenced by three main factors: (1) the distance measure used to locate the nearest neighbors; (2) the decision rule used to derive a classification from the k nearest neighbors; and (3) the number of neighbors used to classify the new sample. It can be proved that, unlike other methods, this method is universally asymptotically convergent, i.e. as the size of the training set increases, if the observations are independent and identically distributed (i.i.d.), regardless of the distribution from which the sample is drawn, the predicted class will converge to the class assignment that minimizes misclassification error. See Devroye et al.
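A minimal kNN sketch with scikit-learn; the synthetic data, the Euclidean metric, and k = 5 are illustrative choices corresponding to the three factors listed above.

```python
# k-nearest neighbor classification: distance measure, majority-vote rule, and k.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=800, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")  # k and the distance measure
knn.fit(X_train, y_train)                                      # majority vote is built into predict()
print("Test accuracy:", knn.score(X_test, y_test))
```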

Geospatial predictive modeling

Conceptually, geospatial predictive modeling is rooted in the principle that the occurrences of events being modeled are limited in distribution. Occurrences of events are neither uniform nor random in distribution; there are spatial environment factors (infrastructure, sociocultural, topographic, etc.) that constrain and influence where events occur. Geospatial predictive modeling attempts to describe those constraints and influences by spatially correlating occurrences of historical geospatial locations with environmental factors that represent those constraints and influences. Geospatial predictive modeling is a process for analyzing events through a geographic filter in order to make statements of likelihood for event occurrence or emergence.



Tools

Historically, using predictive analytics tools, as well as understanding the results they delivered, required advanced skills. However, modern predictive analytics tools are no longer restricted to IT specialists. As more organizations adopt predictive analytics into decision-making processes and integrate it into their operations, they are creating a shift in the market toward business users as the primary consumers of the information. Business users want tools they can use on their own. Vendors are responding by creating new software that removes the mathematical complexity, provides user-friendly graphical interfaces, and/or builds in shortcuts that can, for example, recognize the kind of data available and suggest an appropriate predictive model. Predictive analytics tools have become sophisticated enough to adequately present and dissect data problems, so that any data-savvy information worker can use them to analyze data and retrieve meaningful, useful results. For example, modern tools present findings using simple charts, graphs, and scores that indicate the likelihood of possible outcomes.

There are numerous tools available in the marketplace that help with the execution of predictive analytics. These range from those that need very little user sophistication to those that are designed for the expert practitioner. The difference between these tools is often in the level of customization and heavy data lifting allowed.

Some predictive analytic tools of open-source software include:

Commercial predictive analytic tools include:

Beyond these software packages, specific tools have also been developed for industrial applications. For example, the Watchdog Agent Toolbox has been developed and optimized for predictive analysis in prognostics and health management applications and is available for MATLAB and LabVIEW.

The most popular commercial predictive analysis software packages according to Rexer Analytics Survey for 2013 are IBM SPSS Modeler, SAS Enterprise Miner, and Dell Statistica.

PMML

The Predictive Model Markup Language (PMML) was proposed as a standard language for expressing predictive models. Such an XML-based language provides a way for the different tools to define predictive models and to share them. PMML 4.0 was released in June 2009.



Criticism

There are plenty of skeptics when it comes to computers' and algorithms' abilities to predict the future, including Gary King, a professor from Harvard University and the director of the Institute for Quantitative Social Science. People are influenced by their environment in innumerable ways. Predicting perfectly what people will do next requires that all the influential variables be known and measured accurately. "People's environments change even more quickly than they themselves do. Everything from the weather to their relationship with their mother can change the way people think and act. All of those variables are unpredictable. How they will impact a person is even less predictable. If put in the exact same situation tomorrow, they may make a completely different decision. This means that a statistical prediction is only valid in sterile laboratory conditions, which suddenly isn't as useful as it seemed before."

Source of the article : Wikipedia
