Statistics: Losing Ground to CS


The American Statistical Association (ASA) leadership, and many in Statistics academia, have been undergoing a period of angst in the last few years. They worry that the field of Statistics is headed for a future of reduced national influence and importance, with the feeling that:

The field is to a large extent being usurped by other disciplines, notably Computer Science (CS).

Efforts to make the field attractive to students have largely been unsuccessful.

I had been aware of these issues for quite a while, and thus was pleasantly surprised last year to see then-ASA president Marie Davidian write a plaintive editorial titled "Aren't We Data Science?"

Good, the ASA is taking action, I thought. But even then I was startled to learn during JSM 2014 (a conference tellingly titled "Statistics: Global Impact, Past, Present and Future") that the ASA leadership is so concerned about these problems that it has now retained a PR firm.

This is probably a wise move (most large institutions engage in extensive PR in one way or another), but it is a sad statement about how complacent the profession has become. Indeed, it can be argued that the action is long overdue; as a friend of mine put it, "They [the statistical profession] lost the PR war because they never fought it."

In this post, I'll tell you the rest of the story, as I see it, viewing events as a statistician, computer scientist and R activist.

CS vs. Statistics

Let's consider the CS issue first. Recently a number of new terms have arisen, such as "data science," "Big Data," and "analytics," and the popularity of the term "machine learning" has grown rapidly. To many of us, though, this is just old wine in new bottles, with the wine being Statistics. But the new bottles are disciplines outside of Statistics, especially CS.

I have a foot in both the Statistics and CS camps. I've spent most of my career in the Computer Science Department at the University of California, Davis, but I began my career in Statistics at that institution. My mathematics doctoral thesis at UCLA was in probability theory, and my first years on the faculty at Davis focused on statistical methodology. I was one of the seven charter members of the Department of Statistics. Though my departmental affiliation later changed to CS, I never left Statistics as a field, and most of my research in Computer Science has been statistical in nature. With such dual loyalties, I'll refer to people in both professions via third-person pronouns, not first, and I will be critical of both groups. However, in keeping with the theme of the ASA's recent actions, my essay will be Stat-centric: What is poor Statistics to do?

Well then, how did CS come to annex the Stat field? The primary cause, I believe, came from the CS subfield of Artificial Intelligence (AI). Though there always had been some probabilistic analysis in AI, in recent years the interest has been almost exclusively in predictive analysis, a core area of Statistics.

That switch in AI was due largely to the emergence of Big Data. No one really knows what the term means, but people know it when they see it, and they see it quite often these days. Typical data sets range from large to huge to astronomical (sometimes literally the latter, as cosmology is one of the application fields), necessitating that one pay close attention to the computational aspects. Hence the term "data science," combining quantitative methods with speedy computation, and hence another reason for CS to become involved.

Involvement is one thing, but usurpation is another. Though not a deliberate action by any means, CS is eclipsing Stat in many of Stat's central areas. This is dramatically demonstrated by statements like "With machine learning methods, you don't need statistics," a punch in the gut for statisticians who realize that machine learning really IS statistics. ML goes into great detail in certain aspects, e.g. text mining, but in essence it consists of parametric and nonparametric curve estimation methods from Statistics, such as logistic regression, LASSO, nearest-neighbor classification, random forests, the EM algorithm and so on.
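As a tiny illustration of the "old wine" point: nearest-neighbor classification, a staple of ML courses, is nothing more than nonparametric estimation of the conditional probability P(Y=1 | X=x), followed by thresholding. A toy sketch (in Python rather than R, purely for illustration; all data here are made up):

```python
import random

# Toy 1-D nearest-neighbor classifier: estimate P(Y=1 | X=x) by the
# mean label among the k nearest training points, then threshold at 1/2.
# This is classical nonparametric estimation from Statistics.
def knn_predict(train_x, train_y, x, k=5):
    nearest = sorted(range(len(train_x)), key=lambda i: abs(train_x[i] - x))[:k]
    p_hat = sum(train_y[i] for i in nearest) / k   # estimated P(Y=1 | X=x)
    return 1 if p_hat > 0.5 else 0

random.seed(0)
xs = [random.uniform(0, 10) for _ in range(500)]
ys = [1 if x > 5 else 0 for x in xs]   # a deterministic rule, for clarity

pred_low = knn_predict(xs, ys, 2.0)    # deep in the Y=0 region
pred_high = knn_predict(xs, ys, 8.0)   # deep in the Y=1 region
```

Strip away the ML branding and what remains is a local average, the same estimator statisticians have studied for decades.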

Though the Stat leaders seem to regard all this as something of an existential threat to the well-being of their profession, I view it as much worse than that. The problem is not that CS people are doing Statistics, but rather that they are doing it poorly: Generally the quality of CS work in Stat is weak. It is not a problem of quality of the researchers themselves; indeed, many of them are very highly talented. Instead, there are a number of systemic reasons for this, structural problems with the CS research business model:

CS, having grown out of research on fast-changing software and hardware systems, became accustomed to the 24-hour news cycle: very rapid publication rates, with the venue of choice being (refereed) frequent conferences rather than slow journals. This leads to research work being less thoroughly conducted, and less thoroughly reviewed, resulting in poorer quality work. The fact that some prestigious conferences have acceptance rates in the teens or even lower doesn't negate these realities.

Because CS Depts. at research universities tend to be housed in Colleges of Engineering, there is heavy pressure to bring in lots of research funding and produce lots of PhD students. Large amounts of time are spent on trips to schmooze funding agencies and industrial sponsors, writing grants, meeting conference deadlines and managing a small army of doctoral students, instead of on careful, deep, long-term contemplation of the problems at hand. This is made even worse by the rapid change in the fashionable research topic du jour, making it difficult to go into any topic in real depth. Offloading the actual research onto a large team of grad students can result in faculty not fully applying the talents they were hired for; I've seen too many cases in which the thesis adviser is not sufficiently aware of what his/her students are doing.

There is rampant reinvention of the wheel. The above-mentioned lack of adult supervision and lack of long-term commitment to research topics result in weak knowledge of the literature. This is especially true for knowledge of the Stat literature, of which even the adults tend to have very little awareness. For instance, consider a paper on the use of unlabeled training data in classification. (I'll omit names.) One of the two authors is one of the most prominent names in the machine learning field, and the paper has been cited over 3,000 times, yet it cites nothing in the extensive Stat literature on this topic, consisting of a long stream of papers from 1981 to the present.

Again for historical reasons, CS research is largely empirical/experimental in nature. This causes what in my view is one of the most serious problems plaguing CS research in Stat: lack of rigor. Mind you, I am not saying that every paper should consist of theorems and proofs or be overly abstract; data- and/or simulation-based studies are fine. But there is no substitute for precise thinking, and in my experience, many (nominally) successful CS researchers in Stat do not have a solid understanding of the fundamentals underlying the problems they work on. For example, a recent paper in a top CS conference incorrectly stated that the logistic classification model cannot handle non-monotonic relations between the predictors and response variable; actually, one can add quadratic terms, and so on, to such models.
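To make the quadratic-terms point concrete, here is a small sketch (in Python, with a bare-bones gradient-descent fit rather than any particular library) in which the response depends non-monotonically on the predictor. A logistic model in x alone does little better than guessing, while simply adding an x^2 term to the model recovers the relationship:

```python
import math, random

# Bare-bones logistic regression fit by gradient descent (no libraries).
def fit_logistic(X, y, lr=0.1, epochs=2000):
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))          # sigmoid
            for j, xj in enumerate(xi):
                grad[j] += (p - yi) * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

def accuracy(w, X, y):
    hits = 0
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi))
        hits += (z > 0) == (yi == 1)
    return hits / len(X)

random.seed(1)
xs = [random.uniform(-3, 3) for _ in range(400)]
ys = [1 if abs(x) < 1.5 else 0 for x in xs]   # non-monotonic in x

lin = [[1.0, x] for x in xs]           # intercept + x only
quad = [[1.0, x, x * x] for x in xs]   # also include a quadratic term

acc_lin = accuracy(fit_logistic(lin, ys), lin, ys)
acc_quad = accuracy(fit_logistic(quad, ys), quad, ys)
```

The linear-in-x model cannot express the "middle vs. extremes" boundary, but the model with the x^2 feature can, while remaining a perfectly ordinary logistic model.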

This engineering-style research model causes a cavalier attitude toward underlying models and assumptions. Most empirical work in CS doesn't have any models to worry about. That's entirely appropriate, but in my observation it creates a mentality that inappropriately carries over when CS researchers do Stat work. A few years ago, for instance, I attended a talk by a machine learning specialist who had just earned her PhD at one of the very top CS Departments in the world. She had taken a Bayesian approach to the problem she worked on, and I asked her why she had chosen that specific prior distribution. She couldn't answer; she had just blindly used what her thesis adviser had given her, and moreover, she was baffled as to why anyone would want to know why that prior was chosen.
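The prior question is not a nitpick; the prior can visibly move the answer, which is exactly why one should be able to defend the choice. A minimal sketch, with made-up numbers, using the standard conjugate Beta-Binomial update:

```python
# Conjugate Beta-Binomial update: with a Beta(a, b) prior on a success
# probability and s successes in t trials, the posterior mean is
# (a + s) / (a + b + t).
successes, trials = 3, 10   # made-up data

def posterior_mean(a, b):
    return (successes + a) / (trials + a + b)

flat = posterior_mean(1, 1)         # uniform prior
skeptical = posterior_mean(10, 10)  # strong prior centered at 0.5

# flat is about 0.333 while skeptical is about 0.433:
# the same data, two noticeably different answers.
```

With only ten trials, the choice of prior shifts the estimate by a tenth; a researcher who cannot say why she chose one over the other cannot defend her conclusion.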

Again due to the history of the field, CS people tend to have grand, starry-eyed ambitions; laudable, but a double-edged sword. On the one hand, this is a huge plus, leading to highly impressive feats such as recognizing faces in a crowd. But this mentality also leads to an oversimplified view of things, with everything being viewed as a paradigm shift. Neural networks epitomize this problem. Enticing phrasing such as "Neural networks work like the human brain" blinds many researchers to the fact that neural nets are not fundamentally different from other parametric and nonparametric methods for regression and classification. (Recently I was pleased to discover (learn, if you must) that the famous book by Hastie, Tibshirani and Friedman complains about what they call hype over neural networks; sadly, theirs is a rare voice on this matter.) Among CS folks, there is a failure to understand that the celebrated accomplishments of machine learning have been mainly the result of applying a lot of money, a lot of people time, a lot of computational power and prodigious amounts of tweaking to the given problem, not of fundamentally new technology having been invented.

All this matters, a LOT. In my opinion, the above factors result in highly lamentable opportunity costs. Clearly, I'm not saying that people in CS should stay out of Stat research. But the sad truth is that the usurpation process is causing precious resources (research funding, faculty slots, the best potential grad students, attention from government policymakers, even attention from the press) to go quite disproportionately to CS, even though Statistics is arguably better equipped to make use of them. This is not a CS vs. Stat issue; Statistics is important to the nation and to the world, and if scarce resources aren't being used well, it's everyone's loss.

Making Statistics Attractive to Students

This of course is an age-old problem in Stat. Let's face it: the very word "statistics" sounds hopelessly dull. But I would argue that a more modern development is making the problem a lot worse: the Advanced Placement (AP) Statistics courses in high schools.

Professor Xiao-Li Meng has written extensively about the destructive nature of AP Stat. He observed, "Among Harvard undergraduates I asked, the most frequent reason for not considering a statistical major was a 'turn-off' experience in an AP statistics course." That says it all, doesn't it? And though Meng's views predictably sparked defensive replies in some quarters, I've had exactly the same experiences as Meng in my own interactions with students. No wonder students would rather major in a field like CS and study machine learning, without realizing it is Statistics. It is especially troubling that Statistics may be losing the best and brightest students.

One of the major problems is that AP Stat is usually taught by people who lack depth in the subject matter. A typical example: a student at a top-quality high school in the heart of Silicon Valley complained to me that his AP Stat teacher could not answer his question as to why it is customary to use n-1 rather than n in the denominator of s^2. But even that lapse is really minor, compared to the AP teachers' lack of the broad overview typically possessed by Stat professors teaching university courses, in terms of what can be done with Stat, what the philosophy is, what the concepts really mean and so on. AP courses are ostensibly college level, but the students are not getting college-level instruction. The "teach to the test" syndrome that pervades AP courses in general exacerbates this problem.
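Incidentally, the n-1 question has a short, satisfying answer: dividing by n systematically underestimates the population variance, and dividing by n-1 exactly removes that bias. A quick simulation (sketched in Python here for convenience, though it would work just as well in R) makes the point concrete:

```python
import random

# Draw many small samples from a population with known variance 1 and
# average the two candidate variance estimates over all the samples.
random.seed(42)
n, reps = 5, 20000
avg_div_n = 0.0
avg_div_n1 = 0.0
for _ in range(reps):
    sample = [random.gauss(0, 1) for _ in range(n)]
    m = sum(sample) / n
    ss = sum((x - m) ** 2 for x in sample)
    avg_div_n += ss / n          # divide by n
    avg_div_n1 += ss / (n - 1)   # divide by n - 1
avg_div_n /= reps
avg_div_n1 /= reps
# avg_div_n comes out near (n-1)/n = 0.8, i.e., biased low, while
# avg_div_n1 comes out near the true variance, 1.
```

The deviations are measured from the sample mean rather than the true mean, which makes the sum of squares a bit too small on average; dividing by n-1 instead of n compensates exactly.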

The most exasperating part of all this is that AP Stat officially relies on TI-83 pocket calculators as its computational vehicle. The machines are expensive, and after all we are living in an age in which R is free! Moreover, the calculators lack the capabilities R provides for dazzling graphics and for analyzing nontrivial data sets, exactly the kinds of things that motivate young people.

So, unlike the CS usurpation problem, whose solution is unclear, here is something that actually can be fixed reasonably simply. If I had my druthers, I would simply ban AP Stat, and actually, I am one of those people who would do away with the entire AP program.  Obviously, there are too many deeply entrenched interests for this to happen, but one thing that can be done for AP Stat is to switch its computational vehicle to R.

As noted, R is free and multiplatform, with outstanding graphical capabilities. There is no end to the data sets teenagers would find attractive for use with R, say, the Million Song Data Set.

As to a textbook, there are many introductions to Statistics that use R, such as Michael Crawley's Statistics: An Introduction Using R and Peter Dalgaard's Introductory Statistics with R. But to really do it right, I would suggest that a group of Stat professors collaboratively write an open-source text, as has been done for instance for Chemistry. Examples of interest to high schoolers should be used, say this engaging analysis on OkCupid.

This is not a complete solution by any means. There still is the issue of AP Stat being taught by people who lack depth in the field, and so on. And even switching to R would meet with resistance from various interests, such as the College Board and especially the AP Stat teachers themselves.

But given all these weighty problems, it certainly would be nice to do something, right? Switching to R would be doable, and should be done.


