Setting up the Server for OSS DSS

The first thing to do when setting up your server with open source solutions [OSS] for a decision support system [DSS] is to check all the dependencies and system requirements for the software that you're installing.

Generally, in our case, once you make sure that your software will work on the version of the operating system that you're running, the major dependency is Java. Some of the software that we're running may have trouble with OpenJDK, and other software may require the Java software development kit [JDK or Java SDK], and not just the runtime environment [JRE]. For example, Hadoop 0.20.2 may have problems with OpenJDK, and versions before LucidDB 0.9.3 required the JDK. Once upon a time, two famous database companies would issue system patches that were required for their RDBMS to run, but would break the other's, forcing customers to have only one system on a host. A true pain for development environments.

Since I don't know when you'll be reading this, or if you're planning to use different software than I'm using, I'm just going to suggest that you check very carefully that the system requirements and software dependencies are fulfilled by your server.

Now that we're sure that the *Nix or Microsoft operating system that we're using will support the software that we're using, the next step is to set up a system user for each software package. Here are examples for *Nix operating systems: those derived from the Linux 2.x kernel, and the BSD-derived MacOSX. I've tested this on Red Hat Enterprise Linux 5, OpenSUSE 11, and MacOSX 10.5 [Leopard] and 10.6 [Snow Leopard].

On Linux, at the command line interface [CLI]:

useradd -c "name your software Server" -s /bin/bash -mr USERNAME

-c COMMENT is the comment field, used as the user's full name
-s SHELL defines the login shell
-m creates the home directory
-r creates the account as a system user

You will likely need to run this command through sudo, and may need to give the full path to useradd (e.g. /usr/sbin/useradd).

Then change the password:

sudo passwd USERNAME

Here's one example, setting up the Pentaho system user.

poc@elf:~> sudo /usr/sbin/useradd -c "Pentaho BI Server" -s /bin/bash -mr pentaho
poc@elf:~> sudo passwd pentaho
root's password:
Changing password for pentaho.
New Password:
Reenter New Password:
Password changed.

On the Mac, do the following:

vate:~ poc$ sudo dscl /Local/Default -create /Users/_pentaho UserShell /bin/bash
vate:~ poc$ sudo dscl /Local/Default -create /Users/_pentaho RealName "PentahoCE BI Server"
vate:~ poc$ sudo passwd _pentaho
Changing password for _pentaho.
New Password:
Reenter New Password:
Password changed.
vate:~ poc$

On Windows you'll want to set up your server software as a service after the installation.

If you haven't already done so, you'll want to download the software that you want to use from the appropriate place. In many cases this will be SourceForge. Alternate sources might be the Enterprise Editions of Pentaho, the DynamoBI downloads for LucidDB, SQLstream, SpagoWorld, The R-Project, Hadoop, and many more possibilities.

Installing this software is no different than installing any other software on your particular operating system:

  • On any system you may need to unpack an archive indicated by a .zip, .rar, .gz or .tar file extension. On Windows & MacOSX you will likely just double-click the archive file to unpack it. On *Nix systems, including MacOSX and Linux, you may also use the CLI and a command such as gunzip, unzip, or tar xvzf.
  • On Windows, you'll likely double-click a .exe file and follow the instructions from the installer.
  • On MacOSX, you might double-click a .dmg file and drag the application into the Applications directory, or you'll do something more *Nix like.
  • On Linux systems, you might, at the CLI, execute the .bin file as the system user that you set up for this software.
  • On *Nix systems, you may wish to install the server-side somewhere other than a user-specific or local Applications directory, such as /usr/local/ or even in a web-root.
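
As a concrete sketch of the *Nix CLI route, here is a throwaway example; the directory and archive names are invented for illustration, standing in for a downloaded server distribution.

```shell
# Illustration only: build a stand-in archive, then unpack it the same
# way you would unpack a downloaded server distribution.
mkdir -p /tmp/oss-dss-demo/pentaho-demo
echo "placeholder" > /tmp/oss-dss-demo/pentaho-demo/README.txt
tar czf /tmp/oss-dss-demo/pentaho-demo.tar.gz -C /tmp/oss-dss-demo pentaho-demo

# The unpacking step: x=extract, z=gunzip, f=archive file
# (add v for a verbose listing of the files as they unpack)
rm -rf /tmp/oss-dss-demo/pentaho-demo
tar xzf /tmp/oss-dss-demo/pentaho-demo.tar.gz -C /tmp/oss-dss-demo
ls /tmp/oss-dss-demo/pentaho-demo
```

On a real install you would unpack into /usr/local/ or similar, as the system user you set up above.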

One thing to note is that most of the software that you'll use for an OSS DSS uses Java, and that the latest Pentaho includes the latest Java distribution. Most other software doesn't. Depending on your platform, and the supporting software that you have installed, you may wish to point [softwareNAME]_JAVA_HOME to the Pentaho Java installation, especially if the version of Java included with Pentaho meets the system requirements for other software that you want to use, and you don't have any other compatible Java on your system.
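
A minimal sketch of that pointing, assuming Pentaho unpacked to /usr/local/pentaho with its bundled JRE in a java/ subdirectory (both paths are assumptions; check your actual install):

```shell
# Point a package-specific JAVA_HOME-style variable at the JRE bundled
# with Pentaho; the paths here are assumptions for illustration.
PENTAHO_JAVA_HOME=/usr/local/pentaho/java
export PENTAHO_JAVA_HOME
# Many packages simply honor JAVA_HOME itself:
export JAVA_HOME="$PENTAHO_JAVA_HOME"
echo "$JAVA_HOME"
```

Put the exports in the system user's shell profile so they survive a new login.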

For both security and to avoid confusion, you might want to change the ports used by the software you installed from their defaults.
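
For a Tomcat-hosted server such as the Pentaho BI Server, that usually means editing the connector ports in conf/server.xml. Here is a throwaway sketch; the file content, path and port numbers are invented for illustration:

```shell
# Stand-in for a Tomcat server.xml; on a real install you would edit
# the actual tomcat/conf/server.xml under your server's directory.
mkdir -p /tmp/oss-dss-port-demo
echo '<Connector port="8080" protocol="HTTP/1.1"/>' > /tmp/oss-dss-port-demo/server.xml

# Move the HTTP connector off the default 8080, keeping a .bak backup:
sed -i.bak 's/port="8080"/port="18080"/' /tmp/oss-dss-port-demo/server.xml
grep 'port=' /tmp/oss-dss-port-demo/server.xml
```

Remember to restart the server, and to tell any clients about the new port.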

You may need to change other configuration files from their defaults for various reasons as well, though I generally find the defaults to be satisfactory. You may need to install other software from one package into another package, for compatibility or interchange. For example, if you're trying out, or if you've purchased, Pentaho Enterprise Edition with Hadoop, Pentaho provides Java libraries [JAR files] and licenses to install on each Hadoop node, including code that Pentaho has contributed to the Hadoop project.

Also remember that Hadoop is a top-level Apache project, and not usable software in and of itself. It contains subprojects that make it useful:

  • Hadoop Common, containing the utilities that support all the rest
  • HDFS - the Hadoop Distributed File System
  • MapReduce - the software framework for distributed processing of data on clusters

You may also want one or more of the other Apache subprojects related to Hadoop:

  • Avro - a data serialization system
  • Chukwa - a data collection system
  • HBase - a distributed database management system for structured data
  • Hive - a data warehouse infrastructure
  • Mahout - a data mining library
  • Pig - a high-level data processing language for parallelization
  • Zookeeper - a coordination service for distributed applications

Reading Pentaho Kettle Solutions

On a rainy day, there's nothing better than to be sitting by the stove, stirring a big kettle with a finely turned spoon. I might be cooking up a nice meal of Abruzzo Maccheroni alla Chitarra con Polpettine, but actually, I'm reading the ebook edition of Pentaho Kettle Solutions: Building Open Source ETL Solutions with Pentaho Data Integration on my iPhone.

Some of my notes made while reading Pentaho Kettle Solutions:

…45% of all ETL is still done by hand-coded programs/scripts… made sense when… tools have 6-figure price tags… Actually, some extractions and many transformations can't be done natively in high-priced tools like Informatica and Ab Initio.

Jobs, transformations, steps and hops are the basic building blocks of KETTLE processes

It's great to see the Agile Manifesto quoted at the beginning of the discussion of AgileBI.

BayAreaUseR October Special Event

Zhou Yu organized a great special event for the San Francisco Bay Area Use R group, and has asked me to post the slide decks for download. Here they are:

No longer missing is the very interesting presentation by Yasemin Atalay, showing the difference in plotting analysis using the Windermere Humic Aqueous Model for river water environmental factors without R, and then the increase in variety and accuracy of analysis and plotting gained by using R.

Search Terms for Data Management & Analytics

Recently, for a prospective customer, I created a list of some search terms to provide them with some "late night" reading on data management & analytics. I've tried these terms out on Google, and as suspected, for most, the first hit is for Wikipedia. While most articles in Wikipedia need to be taken with a grain of salt, they will give you a good overview. [By the way, I use the "Talk" page on the articles to see the discussion and arguments about the article's content as an indicator of how big a grain of salt is needed for that article] ;) So plug these into your favorite search engine, and happy reading.

  • Reporting - top two hits on Google are Wikipedia, and, interestingly, Pentaho
  • Ad-hoc reporting
  • OLAP - one of the first page hits is for Julian Hyde's blog, creator of the open source tool for OLAP, Mondrian, as well as real-time analytics engine, SQLstream
  • Enterprise dashboard - interestingly, Wikipedia doesn't come up in the top hits for this term on Google, so here's a link for Wikipedia:
  • Analytics - isn't very useful as a search term, but the product page from SAS gives a nice overview
  • Advanced Analytics - is mostly marketing buzz, so be wary of anything that you find using this as search term

Often, Data Mining, Machine Learning and Predictives are used interchangeably. This isn't really correct, as you can see from the following five search terms…

  • Data Mining
  • Machine Learning
  • Predictive Analytics
  • Predictive Intelligence - is an earlier term for Predictives that has mostly been supplanted by Predictive Analytics. I actually prefer just "Predictives".
  • PMML - Predictive Modeling Markup Language - is a way of transporting predictive models from one software package to another. Few packages will both export and import PMML. The lack of that capability can lock you into a solution, making it expensive to change vendors. The first hit for PMML on Google today is the Data Mining Group, which is a great resource. One company listed, Zementis, is a start-up that is becoming a leader in running data mining and predictive models that have been created anywhere
  • R - the R statistical language, is difficult to search on Google. Go to and … instead. R is useful for writing applications for any type of statistical analysis, and is invaluable for creating new algorithms and predictive models
  • ETL - Extract, Transform & Load, is the most common way of getting information from source systems to analytic systems
  • ReSTful Web Services - Representational State Transfer - can expose data as a web service using the four verbs of the web
  • SOA
  • ADBMS - Analytic Database Management Systems doesn't work well as a search term. Start with the site and follow the links from the Eigenbase subproject, LucidDB. Also, check out AsterData
  • Bayes - The Reverend Thomas Bayes came up with this interesting approach to statistical analysis in the 1700s. I first started creating Bayesian statistical methods and algorithms for predicting reliability and risk associated with solid propellant rockets. You'll find good articles using Bayes as a search term in Google. A bit denser article can be found at …, and some interesting research using Bayes can be found at Andrew Gelman's Blog. You're likely familiar with one common Bayesian algorithm, naïve Bayes, which is used by most anti-spam email programs. Other forms are objective Bayes with non-informative priors and the original subjective Bayes. I have an old aerospace joke about the Rand Corporation's Delphi method, based on subjective Bayes :-) I created my own methodology, and don't really care for naïve Bayes nor non-informative priors.
  • Sentiment Analysis - which is one of Seth Grimes' current areas of research
  • Decision Support Systems - in addition to searching on Google, you might find my recent OSS DSS Study Guide of interest

Let me know if I missed your favorite search term for data management & analytics.

Data Artisan Smith or Scientist

Over the past few months, a debate has been proceeding on whether or not a new discipline, a new career path, is emerging from the tsunami of data bearing down on us. The need for a new type of Renaissance [Wo]Man to deal with the Big Data onslaught. To wit, Data Science.

I'm writing about this now, because last night, at an every-three-week get together devoted to cask beer and data analysis, the topic came up. [Yes, every-THREE-weeks - a month is too long to go without cask beer fueled discussions of Rstats, BigData, Streaming SQL, BI and more.] The statisticians in the group, including myself, strongly disagreed with the way the term is being used; the software/database types were either in favor or ambivalent. We all agreed that a new, interdisciplinary approach to Big Data is needed. Oh, and I'll stay on topic here, and not get into another debate as to the definition of "Big Data". ;)

This lively conversation reinforced my desire to write about Data Science that swelled up in me after reading "What is Data Science?" by Mike Loukides published on O'Reilly Radar, and a subsequent discussion on Twitter held the following weekend, concerning data analytics.

The term "Data Science" isn't new, but it is taking on new meanings. The Journal of Data Science published JDS volume 1, issue 1 in January of 2003. The Scope of the JDS is very clearly related to applied statistics:

By "Data Science", we mean almost everything that has something to do with data: Collecting, analyzing, modeling...... yet the most important part is its applications --- all sorts of applications. This journal is devoted to applications of statistical methods at large.
-- About JDS, Scope, First Paragraph

There is also the CODATA Data Science Journal, which appears to have last been updated in August of 2007, and currently has no content, other than its self-description as

The Data Science Journal is a peer-reviewed electronic journal publishing papers on the management of data and databases in Science and Technology.

I think that two definitions can be derived from these two journals.

  1. Data Science is systematic study, through observation and experiment, of the collection, modeling, analysis, visualization, dissemination, and application of data.
  2. Data Science is the use of data and database technology within physical and natural sciences and engineering.

I can agree with the first, especially with the JDS Scope clearly stating that Data Science is applied statistics.

The New Oxford American Dictionary, on which the Apple Dictionary program is based, defines science as a noun

the intellectual and practical activity encompassing the systematic study of the structure and behaviour of the physical and natural world through observations and experiments.

And a similar definition of science can be found on

In many ways, I like Mike Loukides' article "What is Data Science?" in how it highlights the need for this new discipline. I just don't like what he describes to be the new definition of "data science". Indeed, I very much disagree with this statement from the article.

Using data effectively requires something different from traditional statistics, where actuaries in business suits perform arcane but fairly well-defined kinds of analysis. What differentiates data science from statistics is that data science is a holistic approach. We're increasingly finding data in the wild, and data scientists are involved with gathering data, massaging it into a tractable form, making it tell its story, and presenting that story to others.

A statistician is not an actuary. They're very different roles. I know this because I worked for over a decade applying statistics to determining the reliability and risk associated with very large, complex systems such as rockets and space-borne astrophysics observatories. I once hired a Cal student as an intern because she feared that the only career open to her as a math major, was to be an actuary. I showed her a different path. So, yes, I know, from experience, that a statistician is not an actuary. Actually, the definition of a data scientist given, that is "gathering data, massaging it into a tractable form, making it tell its story, and presenting that story to others" is exactly what a statistician does.

I do however see the need for a new discipline, separate from applied statistics, or data science. The massive amount of data to come from an instrumented world with strongly interconnected people and machines, and real-time analysis, inference and prediction from those data, will require inter-disciplinary skills. But I see those skills coming together in a person who is more of a smith, or, as Julian Hyde put it last night, an artisan. Falling back on the old dictionary again, a smith is someone who is skilled in creating something with a specific material; an artisan is someone who is skilled in a craft, making things by hand.

Another reason that I don't like the term "data science" for this interdisciplinary role, stems from what Mike Loukides describes in his article "What is Data Science?" as the definition for this new discipline "Data science requires skills ranging from traditional computer science to mathematics to art". I agree that the new discipline requires these three things, and more, even softer skills. I disagree that these add up to data science.

I even prefer "data geek", as defined by Michael E. Driscoll in "The Three Sexy Skills of Data Geeks". Michael Driscoll's post of 2009 May 27 certainly agrees skill-wise with Mike Loukides post of 2010 June 02.

  1. Skill #1: Statistics (Studying)
  2. Skill #2: Data Munging (Suffering)
  3. Skill #3: Visualization (Storytelling)

And I very much prefer "Data Munging" to "Computer Science" as one of the three skills.

I'll stick to the definition that I gave above for data science as "systematic study, through observation and experiment, of the collection, modeling, analysis, visualization, dissemination, and application of data". This is also applied statistics. So, what else is needed for this new discipline? Well, Mike and Michael are correct: computer skills, especially data munging, and art. Well, any statistician today has computer skills, generally in one or more of SAS, SPSS, R, S-plus, Python, SQL, Stata, MatLab and other software packages, as well as familiarity with various data storage & management methods. Some statisticians are even artists, perhaps as story tellers, as evidenced by that rare great teacher or convincing expert witness, perhaps as visualizers, creating statistically accurate animations to clearly describe the analysis, as evidenced by the career of that intern I hired so many years ago.

The data smith, the data artisan, must be comfortable with all forms of data:

  • structured,
  • unstructured and
  • semi-structured

Just as any other smith, someone following this new discipline might serve an apprenticeship creating new things from these forms of data such as a data warehouse or an OLAP cube, a sentiment analysis or a streaming SQL sensor web, or a recommendation engine or complex system predictives. The data smith must become very comfortable with putting all forms of data together in new ways, to come to new conclusions.

Just as a goldsmith will never make a piece of jewelry identical to the one finished days before, just as art can be forged but not duplicated, the data smith, the data artisan will glean new inferences every time they look at the data, will make new predictions with every new datum, and the story they tell, the picture they paint, will be different each time.

And perhaps then, the data smith becomes a master, an artisan.

PS: Here's a list of links to that Twitter conversation among some of the most respected people in the biz, on Data Analytics


Technology for the OSS DSS Study Guide

'Tis been longer than intended, but we finally have the technology, time and resources to continue with our Open Source Solutions Decision Support System Study Guide (OSS DSS SG).

First, I want to thank SQLstream for allowing us to use SQLstream as a part of our solution. As mentioned in our "First DSS Study Guide" post, we were hoping to add a real-time component to our DSS. SQLstream is not open source, and not readily available for download. It is, however, a co-founder and core contributor to the open source Eigenbase Project, and has incorporated Eigenbase technology into its product. So, what is SQLstream? To quote their web site, "SQLstream enables executives to make strategic decisions based on current data, in flight, from multiple, diverse sources". And that is why we are so interested in having SQLstream as a part of our DSS technology stack: to have the capability to capture and manipulate data as it is being generated.

Today, there are two very important classes of technologies that should belong to any DSS: data warehousing (DW) and business intelligence (BI). What actually comprises these technologies is still a matter of debate. To me, they are quite interrelated and provide the following capabilities.

  • The means of getting data from one or more sources to one or more target storage & analysis systems. Regardless of the details for the source(s) and the target(s), the traditional means in data warehousing is Extract from the source(s), Transform for consistency & correctness, and Load into the target(s), that is, ETL. Other means, such as using data services within a services oriented architecture (SOA) either using provider-consumer contracts & Web Service Definition Language (WSDL) or representational state transfer (ReST) are also possible.
  • Active storage over the long term of historic and near-current data. Active storage as opposed to static storage, such as a tape archive. This storage should be optimized for reporting and analysis through both its logical and physical data models, and through the database architecture and technologies implemented. Today we're seeing an amazing surge of data storage and management innovation, with column-store relational database management systems (RDBMS), map-reduce (M-R), key-value stores (KVS) and more, especially hybrids of one or several of old and new technologies. The innovation is coming so thick and fast, that the terminology is even more confused than in the rest of the BI world. NoSQL has become a popular term for all non-RDBMS, and even some RDBMS like column-store. But even here, what once meant No Structured Query Language now is often defined as Not only Structured Query Language, as if SQL was the only way to create an RDBMS (can someone say Progress and its proprietary 4GL).
  • Tools for reporting including gathering the data, performing calculations, graphing, or perhaps more accurately, charting, formatting and disseminating.
  • Online Analytical Processing (OLAP) also known as "slice and dice", generally allowing forms of multi-dimensional or pivot analysis. Simply put, there are three underlying concepts for OLAP: the cube (a.k.a. hypercube, multi-dimensional database [MDDB] or OLAP engine), the measures (facts) & dimensions, and aggregation. OLAP provides much more flexibility than reporting, though the two often work hand-in-hand, especially for ad-hoc reporting and analysis.
  • Data Mining, including machine learning and the ability to discover correlations among disparate data sets.
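
To make the OLAP vocabulary above concrete, here is a toy aggregation at the CLI: "sales" rows with one dimension (region) and one measure (amount), rolled up the way an OLAP engine pre-aggregates a cube. The names and numbers are invented for the example.

```shell
# Aggregate a measure (amount) over a dimension (region) with awk;
# this is, in miniature, what an OLAP engine does at much larger
# scale, across many dimensions at once.
printf 'east,100\neast,50\nwest,25\n' |
awk -F, '{ sum[$1] += $2 } END { for (r in sum) print r, sum[r] }' |
sort
```

This prints `east 150` and `west 25`: one aggregated measure per dimension member.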

For our purposes, an important question is whether or not there are open source, or at least open source based, solutions for all of these capabilities. The answer is yes. As a matter of fact, there are three complete open source BI Suites [there were four, but the first, written in Perl, the Bee Project from the Czech Republic, is no longer being updated]. Here's a brief overview of SpagoBI, JasperSoft, and Pentaho.

Capability    SpagoBI     JasperSoft       Pentaho
ETL           Talend      Talend           Kettle (PDI)
Reporting     BIRT        JasperReports    Pentaho Reporting
Analyzer      jPivot      jPivot           jPivot
OLAP          Mondrian    Mondrian         Mondrian
Data Mining   Weka        None             Weka

We'll be using Pentaho, but you can use any of these, or any combination of the OSS projects that are used by these BI Suites, or pick and choose from the more than 60 projects in our OSS Linkblog, as shown in the sidebar to this blog. All of the OSS BI Suites have many more features than shown in the simple table above. For example, SpagoBI has good tools for geographic & location services. Also, JasperSoft Professional and Enterprise Editions have many more features than their Community Edition, such as Ad Hoc Reporting and Dashboards. Pentaho's Enterprise Edition has a different analyzer than either jPivot or PAT: Pentaho Analyzer, based upon the SaaS ClearView from the now-defunct LucidEra. It also adds ease-of-use tools such as an OLAP schema designer, and enterprise class security and administration tools.

Data warehousing using general purpose RDBMSs such as Oracle, EnterpriseDB, PostgreSQL or MySQL is gradually giving way to analytic database management systems (ADBMS), or, as we mentioned above, the catch-all NoSQL data storage systems, or even hybrid systems. For example, Oracle recently introduced hybrid column-row store features, and Aster Data has a column-store Massive Parallel Processing (MPP) DBMS / map-reduce hybrid [updated 20100616 per comment from Seth Grimes]. Pentaho supports Hadoop, as well as traditional general purpose RDBMSs and column-store ADBMSs. In the open source world, there are two columnar storage engines for MySQL, Infobright and Calpont InfiniDB, as well as one column-store ADBMS purpose-built for BI, LucidDB. We'll be using LucidDB, and just for fun, may throw some data into Hadoop.

In addition, a modern DSS needs two more primary capabilities: Predictives, sometimes called predictive intelligence or predictive analytics (PA), which is the ability to go beyond inference and trend analysis by assigning a probability, with associated confidence or likelihood, to an event occurring in the future; and full Statistical Analysis, which includes determining the probability density or distribution function that best describes the data. Of course, there are OSS projects for these as well, such as The R Project, the Apache Commons Math libraries, and other GNU projects that can be found in our Linkblog.

For statistical analysis and predictives, we'll be using the open source R statistical language and the open standard predictive model markup language (PMML), both of which are also supported by Pentaho.

We have all of these OSS projects installed on a Red Hat Enterprise Linux machine. The trick will be to get them all working together. The magic will be in modeling and analyzing the data to support good decisions. There are several areas of decision making that we're considering as examples. One is fairly prosaic, one is very interesting and far-reaching, and the others are somewhat in between.

  1. A fairly simple example would be to take our blog statistics, a real-time stream using SQLstream's Twitter API, and run experiments to determine whether or not, and possibly how, Twitter affects traffic to and interaction with our blogs. Possibly, we could get to the point where we can predict how our use of Twitter will affect our blog.
  2. A much more far-reaching idea was presented by Ken Winnick to me, via Twitter, and has created an on-going Twitter conversation and hashtag, #BPgulfDB. Let's take crowd sourced, government, and other publicly available data about the recent oilspill in the Gulf of Mexico, and analyze it.
  3. Another idea is to take historical home utility usage plus current smart meter usage data, and create a real-time dashboard, and even predictives, for reducing and managing energy usage.
  4. We also have the opportunity of using public data to enhance reporting and analytics for small, rural and research hospitals.

OSS DSS Formalization

The next step in our open source solutions (OSS) for decision support systems (DSS) study guide (SG), according to the syllabus, is to make our first decision: a formal definition of "Decision Support System". Next, and soon, will be a post listing the technologies that will contribute to our studies.

The first stop in looking for a definition of anything today, is Wikipedia. And indeed, Wikipedia does have a nice article on DSS. One of the things that I find most informative about Wikipedia articles, is the "Talk" page for an article. The DSS discussion is rather mild though, no ongoing debate as can be found on some other talk pages, such as the discussion about Business Intelligence. The talk pages also change more often, and provide insight into the thoughts that go into the main article.

And of course, the second stop is a Google search for Decision Support System; a search on DSS is not nearly as fruitful for our purposes. :)

Once upon a time, we might have gone to a library and thumbed through the card catalog to find some books on Decision Support Systems. A more popular approach today would be to search Amazon for Decision Support books. There are several books in my library that you might find interesting for different reasons:

  1. Pentaho Solutions: Business Intelligence and Data Warehousing with Pentaho and MySQL by Roland Bouman & Jos van Dongen provides a very good overview of data warehousing, business intelligence and data mining, all key components to a DSS, and does so within the context of the open source Pentaho suite
  2. Smart Enough Systems: How to Deliver Competitive Advantage by Automating Hidden Decisions by James Taylor & Neil Raden introduces business concepts for truly managing information and using decision support systems, as well as being a primer on data warehousing and business intelligence, but goes beyond this by automating the data flow and decision making processes
  3. Business Intelligence Roadmap: The Complete Project Lifecycle for Decision-Support Applications by Larissa T. Moss & Shaku Atre takes a business, program and project management approach to implementing DSS within a company, introducing fundamental concepts in a clear, though simplistic level
  4. Competing on Analytics: The New Science of Winning by Thomas H. Davenport & Jeanne G. Harris in many ways goes into the next generation of decision support by showing how data, statistical and quantitative analysis within a context specific processes, gives businesses a strong lead over their competition, albeit, it does so at a very simplistic, formulaic level

These books range from being technology focused to being general business books, but they all provide insight into how various components of DSS fit into a business, and different approaches to implementing them. None of them actually provide a complete DSS, and only the first focuses on OSS. If you followed the Amazon search link given previously, you might also have noticed that there are books that show Excel as a DSS, and there is a preponderance of books that focus on the biomedical/pharmaceutical/healthcare industry. Another focus area is in using geographic information systems (actually one of the first uses for multi-dimensional databases) for decision support. There are several books in this search that look good, but haven't made it into my library as yet. I would love to hear your recommendations (perhaps in the comments).

From all of this, and our experiences in implementing various DW, BI and DSS programs, I'm going to give a definition of DSS. From a previous post in this DSS SG, we have the following:

A DSS is a set of processes and technology that help an individual to make a better decision than they could without the DSS.
-- Questions and Commonality

As we stated, this is vague and generic. Now that we've done some reading, let's see if we can do better.

A DSS assists an individual in reaching the best possible conclusion, resolution or course of action in stand-alone, iterative or interdependent situations, by using historical and current structured and unstructured data, collaboration with colleagues, and personal knowledge to predict the outcome or infer the consequences.

I like that definition, but your comments will help to refine it.

Note that we make no mention of specific processes, nor any technology whatsoever. It reflects my bias that decisions are made by individuals not groups (electoral systems notwithstanding). To be true to our "TeleInterActive Lifestyle" ;) I should point out that the DSS must be available when and where the individual needs to make the decision.

Any comments?

R the next Big Thing or Not

Recently, AnnMaria De Mars, PhD (multiple) and Dr. Peter Flom, PhD have stirred up a bit of a tempest in a tweet-pot, as well as in the statistical blogosphere, with comparisons of R and SAS, IBM/SPSS and the like. I've commented on both of their blogs, but decided to expand a bit here, as the choice of R is something that we planned to cover in a later post to our Open Source Solutions Decision Support Systems Study Guide. First, let me say that Dr. De Mars and Dr. Flom appear to have posted completely independently of each other, and further, that their posts have different goals.

In The Next Big Thing, Dr. De Mars is looking for the next big thing, both to keep her own career on track, and to guide students into areas of study that will survive in the job market in the coming decades. This is always difficult for mentors, as we can't always anticipate the "black swan" events that might change things drastically. The tempestuous nature of her post came from one little sentence:

Contrary to what some people seem to think, R is definitely not the next big thing, either. -- AnnMaria De Mars, The Next Big Thing, AnnMaria's Blog

In SAS vs. R, Introduction and Request, Dr. Flom starts a series comparing R and SAS from the standpoint of a statistician deciding upon tools to use.

There are several threads in Dr. De Mars' post. I agree with Dr. De Mars that two of the "next big things" in data management & analysis are data visualization and dealing with unstructured data. I'm of the opinion that there is a third area, related to the "Internet of Things" and the tsunami of data that will be generated by it. These are conceptual areas, however. Dr. De Mars quickly moves on to discussing the tools that might be a part of the solutions to these next big things. The concepts cited are neither software packages nor computing languages. Neither the software packages (SAS, IBM/SPSS, Stata, Pentaho and the like) nor the computing language S, with its open source distribution R and its proprietary distribution S+, is likely to be the next big thing, though they are all currently useful tools to know.

I find it interesting that both Dr. De Mars and Dr. Flom, as well as the various commenters, tweeters, and other posters, are comparing software suites and applications with a computing language. I think that a bit more historical perspective might be needed in bringing these threads together.

In 1979, when I first sat down with a FORTRAN programmer to turn my Bayesian methodologies into practical applications to determine the reliability and risk associated with the STAR48 kick motor and associated Payload Assist Module (PAM), the statistical libraries for FORTRAN seemed amazing. The ease with which we were able to create the program and churn through decades of NASA data (after buying a 1MB memory box for the mainframe) was wondrous ;)

Today, there's not so much wonder from such a feat. The evolution of computing has drastically affected the way in which we apply mathematics and statistics. Several of the comments to these posts argue both sides of the statement that anyone doing statistics today should be a programmer, or shouldn't. It's an interesting argument, one I've also seen reflected in chemistry, as fewer technicians are used in the lab, and the Ph.D.s work directly with the robots to prepare the samples and interpret the results.

Approximately 15 years ago, I moved from solving scientific and engineering problems directly with statistics, to solving business problems through vendors' software suites. The marketing names for this endeavor have gone through several changes: Decision Support Systems, Very Large Databases, Data Warehousing, Data Marts, Corporate Information Factory, Business Intelligence, and the like. Today, Data Mining, Data Visualization, Sentiment Analysis, "Big Data", SQL Streaming, and similar buzzwords reflect the new "big thing". Software applications, from new as well as established vendors, both open source and proprietary, are coming to the fore to handle these new areas that represent real problems.

So, one question to answer for students, is which, if any, of these software packages will best survive with, and aid the growth of, their maturing careers. Will Tableau, LyzaSoft, QlikView or Viney@rd be in a better spot in 20 years, through growth or acquisition, than SAS or IBM/SPSS? Will the open source movement take down the proprietary vendors or be subsumed by them? Is Pentaho/Weka the BI & data mining solution for their career? Maybe, maybe not. But what about that other beast of which everyone speaks? Namely, R, the r-project, the R Statistical Language. What is it? Is it a worthy alternative to SAS or IBM/SPSS or Pentaho/Weka? Or is it a different genus altogether? That's a question I've been seeking to answer for myself, in my own career evolution. After 15 years, software such as SAP/Business Objects and IBM/Cognos hasn't evolved into anything that I like, with their pinnacle of statistical computation being the "average", the arithmetic mean. SAS and IBM/SPSS are certainly better, and with data mining, machine learning and predictives becoming important to business, they're likely to be a good choice for the future. But are they really powerful enough? Are they flexible enough? Can they be used to solve the next generation of data problems? They're very likely to evolve into software that can do so. But how quickly? And like all vendor software, they have limitations based upon the market studies and business decisions of the corporation.

How is R different?

Well, first, R is a computing language. Unlike SAP/Business Objects, IBM/Cognos, IBM/SPSS, SAS, Pentaho, JasperSoft, SpagoBI, or Oracle, it's not a company, nor a BI Suite, nor even a collection of software applications. Second, R is an open source project. It's an open source implementation of S. Like C and the other single-letter-named languages, S came out of Bell Labs, in 1976. The open source implementation, R, comes from R. Ihaka and R. Gentleman, first revealed in 1996 through the article "R: A language for data analysis and graphics", Journal of Computational and Graphical Statistics, 5:299–314, and is often associated with the Department of Statistics, University of Auckland.

While I'm not a software engineer, I find R a very compelling statistical tool. As a language, it's very intuitive… for a statistician. It's an interactive, interpreted, functional, object-oriented, statistical programming language. R itself is written in R, C, C++ and FORTRAN; it's powerful. As an open source project, it has attracted thousands upon thousands of users who have formed a strong community. There are thousands upon thousands of community-contributed packages for R. It's flexible, and growing. One of the main goals of R was data visualization, and it has a wonderful new package for data visualization in ggplot2. It's ahead of the curve. There are packages for parallel processing (some quite specific), for big data beyond in-memory capacity, for servers, and for embedding in a web site. Get the idea? If you think you need something in R, search CRAN, RForge, BioConductor or Omegahat.

As you can tell, I like R. :) However, in all honesty, I don't think that the SAS vs. R controversy is an either/or situation. SAS, IBM/SPSS and Pentaho complement R, and vice versa. Pentaho, IBM/SPSS and some SAS products support R. R can read data from SAS, IBM/SPSS, relational databases, Excel, MapReduce and more. The real question isn't whether one tool is better than another, but rather which tool best answers a particular question. That being said, I'm looking forward to Dr. Flom's comparison, as well as the continuing discussion on Dr. De Mars' blog.

For us, the question is building a decision support system or stack from open source components. It looks like we'll have a good time doing so.

OSS DSS Studies Introduction

First, let me say that we're talking about systems supporting the decisions that are made by human beings, not "expert systems" that automate decisions.  As an example, let's look at inventory management.  A human might use various components of a DSS to determine the amount of an item in stock, the demand for that item as a trend to determine when it might be out of stock, and predictives as to various factors (internal, external, environmental, political, etc.) that might affect supply, to come to a decision as to how much and when to order more of that item.  An expert system might be created that could also determine when and how much of an item to order, using neural networks, Bayesian nets or other algorithms.  The expert system might even draw from the same DSS components (or directly from their underlying data) as the human might.  One could even run the expert system in parallel with humans making the decisions, scoring or otherwise evaluating the two, until the expert system performs comparably to, or better than, the humans.  But, we're not really interested in expert systems in this study guide.  We'll be focusing on systems that help humans to make better decisions, not on automated feedback and control loops.
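The inventory example can be sketched in code. Here's a minimal, hypothetical illustration (the demand figures, lead time and safety factor are all invented for the example, not drawn from any real system) of the kind of calculation a DSS might surface to the human decision maker:

```python
import statistics

def reorder_point(daily_demand, lead_time_days, safety_factor=1.65):
    """Suggest the stock level at which to reorder: expected demand
    over the supplier lead time, plus safety stock for variability."""
    mean_demand = statistics.mean(daily_demand)
    stdev_demand = statistics.stdev(daily_demand)
    safety_stock = safety_factor * stdev_demand * lead_time_days ** 0.5
    return mean_demand * lead_time_days + safety_stock

# Hypothetical historical demand, in units per day, for one item
demand = [12, 9, 14, 11, 10, 13, 12, 8, 15, 11]
rop = reorder_point(demand, lead_time_days=5)
print(f"Suggest reordering when stock falls below {rop:.0f} units")
```

The point of the sketch is the division of labor: the system computes the number, but the human weighs it against the internal, external, environmental and political factors mentioned above before placing the order.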

To me, a technology doesn't matter very much if it's not supporting some process, or a step within a process.  That process may be for personal reasons or supporting work activities. For this study guide, let's begin by continuing the discussion that we began in the previous posts, about the process by which one makes a decision, the steps, the events, the triggers and the consequences of making a decision.

I have my own process in making decisions.  I've served in executive and management roles for many years, and have been responsible for five P&L centers.  But this is a study guide, and while I intend to offer my own opinions and interpretations, we need some objective sources to study.  Let's start with a Google search.  Of course, Wikipedia has an article.  A site I'd not heard of before has the first hit, with their article on problem-solving and decision-making.  Science Daily has a timely article from 13 March 2010 on how we really make decisions: our brain activity during decision making.  I also like the map from The Institute for Strategic Clarity.  Mindtools sets out a list of techniques and tools for aiding in the decision making process, and provides an important caveat: "Do remember, though, that the tools in this chapter exist only to assist your intelligence and common sense. These are your most important assets in good Decision Making".  Reading through various reviews, the one book on decision making that I want to add to my library is The Managerial Decision-Making Process, 5th ed. by E. Frank Harrison.  From the Glossary of Political Economy Terms, we have:

Where formal organizations are the setting in which decisions are made, the particular decisions or policies chosen by decision-makers can often be explained through reference to the organization's particular structure and procedural rules. Such explanations typically involve looking at the distribution of responsibilities among organizational sub-units, the activities of committees and ad hoc coordinating groups, meeting schedules, rules of order etc. The notion of fixed-in-advance standard operating procedures (SOPs) typically plays an important role in such explanations of individual decisions made. -- Organizational process models of decision-making

Let's revisit and expand upon the summary that we gave in the third post in this series.

  1. As an individual faced with making a decision, I may want input from others, I may want consensus, but in the end, it is an individual decision, and I will bear the fruits of having made that decision.
  2. I need to put the problem, and my decision making, into context.  I have a variety of resources at my disposal to do so:

    • historical data
    • current information
    • structured data from transactional systems, master data, metadata, data warehouse, and other possible sources
    • unstructured data from blogs, wikis, Zotero libraries, Evernote, searches, bookmarks and similar sources
    • email
    • non-electronic correspondence, notes and conversations
    • personal experience
    • the experience of others garnered through water cooler and hallway conversations, formal meetings, twitter, phone calls and the like
  3. Now I need to understand all of these facts, opinions and conjecture at my disposal.  Part of this is sifting all of it through my internal filters, using my "gut".  Part is using the various reporting and analytical tools at my disposal, and then filtering those through my gut.  And really, this and the next point will constitute the majority of this OSS DSS Study Guide - the tools we use.
  4. As I contemplate the various decisions that I might make from all of this, I want to understand the consequences of each potential decision: might this decision lead to a better product, more profit, less profit, broader market penetration, higher reliability, or even an alternate universe.
  5. As I make this decision, I'll want to collaborate with others.  Ideally, I'll want to collaborate within the context of my decision support system. Once upon a time, we would do this by embedding the tools within a portal system; now we take a more master data management approach, and use a services oriented architecture with either web services description language (WSDL) or representational state transfer (ReST) application programming interfaces (APIs) to the collaborative environment, usually a wiki.
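As a sketch of that last point, here's what a ReST-style call to such a collaborative environment might look like. The wiki host, URL path and payload shape are all invented for illustration; a real wiki's API will differ:

```python
import json
from urllib import request

def build_decision_note(base_url, page, text):
    """Build (but don't send) a ReST-style PUT request that would
    record a decision note on a hypothetical wiki page."""
    url = f"{base_url}/api/pages/{page}"
    body = json.dumps({"content": text}).encode("utf-8")
    return request.Request(url, data=body, method="PUT",
                           headers={"Content-Type": "application/json"})

req = build_decision_note("https://wiki.example.com", "q3-inventory-decision",
                          "Decided to increase safety stock for item 1234.")
# request.urlopen(req) would actually send it; omitted, as the endpoint is fictional
print(req.get_method(), req.full_url)
```

The appeal of the ReST approach over the old portal-embedding approach is exactly this: any tool that can issue an HTTP request can push its findings into the shared collaborative space.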

In summary, this introduction has set up a framework for a decision-making process for an individual to use a decision support system.  The majority of this study guide will explore the actual decision support system, and the open source tools from which we can build such a system.

Syllabus for OSS DSS Studies

As promised, here's the syllabus for our study guide to decision support systems using open source solutions. We'll start with a first draft on 2010-03-23, and update and change it based on ideas, comments and lessons learned. So, please comment. :) The updates will be marked. Deletions will be marked with a strike-through and not removed.

  1. Introduction
    1. Continuing the discussion of the processes and technologies that constitute a decision support system
    2. Formalizing a definition of DSS as well as the components, such as business intelligence (BI) that contribute to a DSS
    3. Providing [and updating] the list of references for this study guide
  2. Preparation
    1. Discussing the technology for use in this study guide including the client(s) and server (Red Hat Enterprise Linux 5)
    2. Checking for prerequisites for the open source solutions that will be used
    3. Hands-on exercises for preparing the system
  3. Installation
    1. Pointers and examples for installing the open source server-side packages including but not limited to:
      1. LucidDB
      2. Pentaho BI-Server, including PAT, and Administrative Console
      3. RServe and/or RApache
    2. Pointers for installation of client-side software and some examples on MacOSX
  4. Modeling
    1. Generally, we would determine the models, the architecture and then one (or more competing) design(s) to satisfy that architecture, including selecting the right technical solutions for the job at hand. Here, we're creating a learning environment for certain tools, so we're introducing the architecture and design studies after the technology installs.
    2. In general, this section will explore the various means of modeling processes, systems and data, specifically as these relate to making decisions.
    3. Decision Making Processes
      1. Decision Theory
      2. Game Theory
      3. Machine Learning & Data Mining
      4. Bayes and Iterations
      5. Predictives
    4. Information Flow
    5. Mathematical Modeling
    6. Data Modeling
    7. UML
    8. Dimensional Modeling
    9. PMML
  5. Architecture and Design
    1. In this section, we'll examine the differences between enterprise and system architecture, and between architecture and design. We'll look at various architectural and design elements that might influence both policy and technology directions.
    2. Discussing Enterprise Architecture, especially the translation between the user needs and technology/operational realities
    3. System Architecture
    4. SOA, ReST, WSDL, and Master Data Management
    5. Technology selection and vendor bake-offs
  6. Implementation Considerations
    1. Discussing the various philosophies and considerations for implementing any DSS, or really, any system integration project. We'll look at our own three track implementation methodology, as well as how the new Pentaho Agile BI tools support our method. In addition, we'll consider how we'll get all these OSS tools working together, on the same data sets, as well as, the importance of managing data about the data.
    2. Pentaho Agile BI and our own 8D™ Method
    3. System and Data Integration
    4. Metadata
  7. Using the Tools
    1. This is the vaguest part of our syllabus. We'll be using the examples from our various references, but with the system we've set up here, rather than the exact systems that the references use. For example, we'll be using LucidDB and not MySQL for the examples from Pentaho Solutions. Remember, too, that this is a study guide, and not an opus meant to be a book written as a series of blog posts, so while we might vary from the reference materials, we'll always refer to them.
    2. ETL
    3. Reporting
    4. OLAP
    5. Data Mining & Machine Learning
    6. Statistical Analysis
    7. Predictives
    8. Workflow
    9. Collaboration
    10. Hmm, this should take years :D


The Open Source Solutions Blog is a companion to the Open Source Solutions for Business Intelligence Research Project, sponsored by InterActive Systems & Consulting, Inc. This Blog, a Wiki and Lens will be used to develop, support and publish the findings of our research into enterprise open source projects.

InterActive Systems & Consulting, Inc. (IASC) performs research in the areas of data analytics, collaboration and remote access.

InterASC Professional Services, a service mark of IASC, provides strategic consulting and project management for data warehousing, business intelligence and collaboration projects using proprietary and open source solutions. We formulate vendor-independent strategies and implement solutions for information management in an increasingly complex and distributed business environment, allowing secure data analysis and collaboration that provides enterprise information in the most valuable form to the right person, whenever and wherever needed.

TeleInterActive Networks, a service mark of IASC, hosts open source applications for small and medium enterprises including CMS, blogs, wikis, database applications, portals and mobile access. We provide the tools for SME to put their customer at the center of their business, and leverage information management in a way previously reserved for larger organizations.

