Statistics and Data Mining In The Analysis of Massive Data Sets
By James Kolsky, June 1997

  • Most Data Mining techniques are statistical exploratory data analysis tools.
  • Care must be taken not to "over-analyze" the data.
  • A complete understanding of the data and its collection methods is particularly important.
  • Database sampling or cluster analysis may help in reducing the dimension and size of massive data sets.
  • More data visualization tools need to be developed to aid in the analysis of massive data sets.
In recent years some data sets have grown in size and complexity, reaching massive proportions. This phenomenon, due in part to increases in computing resources, results from Data Warehousing and customer-led or "relationship" marketing. Data Warehousing is the combination of numerous moderate-sized data sets that had been kept separate, possibly at different locations, into a single, centralized database. Customer-led marketing, as opposed to product-led marketing, has resulted in a great deal more data being collected about consumers than in the past. For example, computers now allow data from package bar codes to be recorded electronically as products are passed through checkout scanners. Thus, all products, bought at numerous locations, can be recorded quickly and easily. In addition, census records available in electronic form can be used to examine extensive geographical and socioeconomic data about the regions in which the stores are located. From this data one can determine the areas where different marketing strategies work best.

Researchers are challenged with drawing meaningful conclusions from these masses of data in a timely manner. One catch-phrase being bandied about with some regularity is "Data Mining." And, in fact, several software packages now have "Data Mining" options, implying new methodologies that can be used to solve some of the problems associated with massive data sets. The phrase "data mining" has been applied to a host of procedures over the years, many with negative connotations, especially among statisticians. The current use of the phrase seems to have originated in the computer science field. A definition that has been repeated often is "a process that deals with the discovery of hidden knowledge, unexpected patterns, and new rules from large data bases, particularly the discovery of optimal clusters and interesting irregularities." Not surprisingly, this sounds suspiciously like the role of statistical analysis. And, in fact, many of these data mining techniques are similar to basic statistical methods of exploratory data analysis (EDA) and data visualization that have been used for years. More importantly, the statistical issues that have plagued statisticians and non-statisticians alike in their analysis efforts have not, in any way, been resolved by the use of Data Mining software. Consequently, Data Mining software, rather than being a panacea, may add little to the toolboxes of researchers who are already familiar with, and have access to, basic statistical tools.

So what are these issues? First, and foremost, massive data sets are still collections of data, and it is important to understand how the data was collected. Any conclusions from the analysis will only be as good as the original data. Researchers saddled with "bad" data face a severe disadvantage; in extreme cases, even sophisticated statistical techniques cannot address the objectives to any degree of satisfaction. One cause of bad data is the poor specification of objectives. For instance, poorly worded or vague objectives can lead to the collection of data that answers the wrong question. Poor data collection methods can also create biases in the data or result in data that is not representative of the population to be sampled. Other problems include data that has been aggregated over important variables and data sets with large amounts of missing data. Most of these problems can be avoided by carefully outlining the objectives before data collection begins and by performing short pilot studies. Pilot studies are good, cost-effective tools for identifying problems in protocols and design methodologies.

Extremely large data sets are usually quite complex, frequently containing scores of variables, many of which can be described only by non-linear relationships. Numerous variables may also interact with each other. These issues combine to make many statistical procedures, such as Analysis of Variance or regression analysis, difficult to use. Care must also be taken not to "over-analyze" data with many variables: no matter how large the data set is originally, if it is cut into enough segments, significant differences will be found between groups simply by chance.
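To see why, consider the following minimal sketch in Python (the data are entirely simulated and the segment structure is arbitrary): a single homogeneous sample is cut into twenty segments, and every pair of segments is compared with a t-test at the usual 5% level. Even though no real differences exist, several comparisons come out "significant" purely by chance.

# Minimal sketch: "significant" differences appear by chance when a
# homogeneous data set is cut into enough segments (simulated data).
import numpy as np
from scipy import stats
from itertools import combinations

rng = np.random.default_rng(0)

# One homogeneous population: 10,000 observations, no real group structure.
values = rng.normal(loc=100.0, scale=15.0, size=10_000)

# Cut it into 20 arbitrary "segments" (e.g. by store, region, or week).
segments = np.array_split(values, 20)

# Test every pair of segments at the usual 5% level.
false_positives = 0
n_tests = 0
for a, b in combinations(range(len(segments)), 2):
    _, p = stats.ttest_ind(segments[a], segments[b])
    n_tests += 1
    if p < 0.05:
        false_positives += 1

print(f"{n_tests} pairwise tests, {false_positives} significant at p < 0.05")
# With 190 tests, roughly 190 * 0.05, or about 9 or 10, spurious "findings"
# are expected, even though every segment came from the same population.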

EDA and data visualization techniques, though primarily descriptive, remain the primary starting points for identifying relationships in the data. Such techniques include box plots and histograms of individual variables and scatter plots of pairs of variables. These graphical representations can lead to a reduction in the number of variables that must be addressed by highlighting strong trends or patterns in the data. More graphical displays, however, need to be developed that can better describe higher-dimensional patterns in the data.
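As a simple illustration of these starting points, the following sketch in Python uses pandas and matplotlib with an invented data set and column names to produce histograms and box plots of individual variables and a scatter-plot matrix of their pairs.

# Minimal EDA sketch: histograms, box plots, and pairwise scatter plots
# for a few variables (hypothetical purchase data).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "basket_size":  rng.gamma(shape=2.0, scale=20.0, size=500),
    "items":        rng.poisson(lam=8, size=500),
    "visits_month": rng.poisson(lam=3, size=500),
})

# Histograms and box plots of individual variables.
df.hist(bins=30)
df.plot(kind="box", subplots=True, layout=(1, 3), sharey=False)

# Scatter plots of pairs of variables to highlight trends or clusters.
pd.plotting.scatter_matrix(df, diagonal="hist")

plt.show()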

Alternatively, multivariate techniques such as cluster analysis may allow researchers to identify groups, or clusters, of related variables. By reducing the dimension of the data to a few clusters, it may be possible to use standard statistical tools for all subsequent analysis.
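One way this can be done, sketched below in Python with an invented data set, is to cluster the variables hierarchically using one minus the absolute correlation as the distance between variables, and then carry one representative (or the average) of each cluster into subsequent analyses.

# Minimal sketch: group related variables by hierarchical clustering on
# a correlation-based distance (hypothetical customer data).
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
n = 300
income = rng.normal(50, 10, n)
spend  = income * 0.4 + rng.normal(0, 2, n)   # related to income
age    = rng.normal(40, 12, n)
tenure = age * 0.5 + rng.normal(0, 3, n)      # related to age
df = pd.DataFrame({"income": income, "spend": spend,
                   "age": age, "tenure": tenure})

# Distance between variables: 1 - |correlation|, so strongly related
# variables end up close together.
dist = 1.0 - df.corr().abs()
condensed = squareform(dist.values, checks=False)
tree = linkage(condensed, method="average")

# Cut the tree into two clusters of variables.
labels = fcluster(tree, t=2, criterion="maxclust")
for var, lab in zip(df.columns, labels):
    print(var, "-> cluster", lab)
# Subsequent analyses can then use one representative (or the mean)
# per cluster instead of every original variable.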

Constraints on software packages may prevent the standard analysis of data sets with massive numbers of observations. The amount of data to be analyzed can be reduced by sampling the database, and software packages can then be used on the sampled data. Simple random samples, in which each observation has the same probability of selection, are commonly used plans. However, they assume that the database to be sampled is homogeneous. If there are clusters of data within the database, a simple random sample will not be an effective tool and subsequent conclusions may be biased. In these cases, other sampling plans should be examined.
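The sketch below, in Python with pandas and a hypothetical customer table, contrasts the two approaches: a simple random sample of the database, and a stratified sample that draws proportionally from each region so that known clusters are represented.

# Minimal sketch: simple random sampling vs. stratified sampling of a
# large table (hypothetical customer database with a "region" column).
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 100_000
db = pd.DataFrame({
    "region": rng.choice(["north", "south", "east", "west"],
                         size=n, p=[0.5, 0.3, 0.15, 0.05]),
    "spend":  rng.gamma(shape=2.0, scale=50.0, size=n),
})

# Simple random sample: every row has the same chance of selection.
# Fine if the database is homogeneous, but small regions may be
# under-represented or missed entirely.
srs = db.sample(n=1_000, random_state=0)

# Stratified sample: draw 1% from each region so every known cluster
# appears in proportion to its size.
stratified = (db.groupby("region", group_keys=False)
                .apply(lambda g: g.sample(frac=0.01, random_state=0)))

print(srs["region"].value_counts(normalize=True))
print(stratified["region"].value_counts(normalize=True))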

While large data sets introduce additional complications to their analysis, researchers should not disregard the basic statistical concepts that have served so well when analyzing smaller data sets. Data collection methods should reflect the overall objectives, and the initial analysis should consist of EDA and data visualization techniques. Once a complete understanding of the data has been gained, more complicated methods, such as cluster analysis or database sampling, can be attempted.


Reprinted with permission of the American Marketing Association (Marketing News: to be published)