Correlative Analytics (Google’s way of doing science)

A friend sent me this link to an article describing a way of doing science (making predictions) without any theory or hypothetical model to explain the observed data. Instead, if the data set is large enough (petabytes of it), all that is required are clever statistical algorithms that find correlations in the data and use them to make predictions. There is no need to propose theories or hypothetical models and then see which one fits the data best.

These are powerful techniques, opened up by access to huge amounts of data, and the writer of the article argues that they do not require discarding the scientific method but could instead complement it.
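
Purely as an illustration of the idea, here is a minimal sketch in Python (with made-up data and variable names, not anything taken from the article) of what prediction by correlation alone looks like: rank candidate predictors by how strongly they correlate with the target and use the best ones to guess new values, with no underlying model of why they co-vary.

    # A minimal sketch of correlation-driven prediction: no mechanistic model,
    # just "find the variables that co-vary with the target and lean on them".
    # All data here is synthetic and purely illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    # Fake dataset: 10,000 observations of 50 candidate predictors.
    X = rng.normal(size=(10_000, 50))
    y = 0.8 * X[:, 3] - 0.5 * X[:, 17] + rng.normal(scale=0.3, size=10_000)

    # "Theory-free" step: rank predictors purely by correlation with the target.
    corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    top = np.argsort(-np.abs(corrs))[:2]
    print("most correlated columns:", top, "correlations:", corrs[top])

    # Predict a new observation from a correlation-weighted combination of the
    # standardised top predictors -- a pattern-matching guess, not an explanation.
    x_new = rng.normal(size=50)
    means, stds = X.mean(axis=0), X.std(axis=0)
    z = [(x_new[j] - means[j]) / stds[j] for j in top]
    y_hat = sum(c * zj for c, zj in zip(corrs[top], z)) * y.std() + y.mean()
    print("prediction for the new observation:", y_hat)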

One Response to “Correlative Analytics (Google’s way of doing science)”

  1. Jerry Jasperson Says:

    This is very interesting!

    About a decade ago, I began thinking about software development and what a piece of software, i.e. an application, ultimately boils down to: a series of 1’s and 0’s.

    All the software that could ever be created, including the most bug-ridden, could be produced simply by enumerating vast permutations of 1’s and 0’s. Of course, finding the “right” applications that way would prove to be impossible. However, I suspect that there are statistical characteristics shared among various types of well-formed programs that could narrow the search for the “right” applications.
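
For what it is worth, here is a toy sketch (my own, in Python, with invented byte strings standing in for real programs) of the kind of statistical narrowing the commenter imagines: gather byte-frequency statistics from a corpus of well-formed programs, then use them to score candidate bit strings so that a blind enumeration could at least be prioritised.

    # Toy sketch of "statistical characteristics of well-formed programs".
    # The corpus and candidates below are invented for illustration only.
    import math
    from collections import Counter

    def byte_profile(programs):
        """Byte-value frequencies across a corpus of known well-formed programs."""
        counts = Counter()
        for p in programs:
            counts.update(p)  # iterating bytes yields integer byte values
        return counts

    def plausibility(candidate, profile):
        """Smoothed log-likelihood of a candidate byte string under the corpus profile."""
        total = sum(profile.values())
        return sum(math.log((profile[b] + 1) / (total + 256)) for b in candidate)

    # Blind enumeration is hopeless: n bytes give 2**(8*n) candidates.
    print("candidates of length 100 bytes:", 2 ** (8 * 100))

    # Invented stand-ins for well-formed programs, and two candidates to score.
    corpus = [bytes([0x7F, 0x45, 0x4C, 0x46, 0x02, 0x01]),
              bytes([0x7F, 0x45, 0x4C, 0x46, 0x01, 0x01])]
    profile = byte_profile(corpus)
    print("program-like:", plausibility(bytes([0x7F, 0x45, 0x4C, 0x46]), profile))
    print("random noise:", plausibility(bytes([0xAA, 0xBB, 0xCC, 0xDD]), profile))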
