Comments on the editorial of the “Machine Learning For Science and Society” issue

For whatever reason, I am more fascinated by the applied aspects of any research, and Machine Learning (ML) is no exception. While I use machine learning approaches in my work and studied the basics during my masters (and on and off during my PhD now), I never found much information on what happens to all the hundreds of new algorithms proposed every year. How many of them actually get used by non-ML researchers working on some other problem? How many of them get used by others who want to solve some real-world problem?

I attended the Machine Learning Summer School in 2013, where, for two weeks, I was fortunate enough to listen to some of the best researchers in the field speak about ML in general and their work in particular. However, I got the feeling that the community is not so keen on a reality check about the applicability of these algorithms. So, basically, the questions remained.

“Machine Learning that Matters” (Kiri Wagstaff, 2012) is an article I keep thinking about whenever this sort of discussion comes up with fellow grad students. (My thoughts on it are here.) In the past few days, there have been a lot of short online/offline discussions about how an effort to do more evaluation on real-world scenarios/datasets is perceived by reviewers at various academic conferences (disclaimer: these discussions are not exclusively about ML, but some of the people in them happen to be grad students working in ML).
We, with our own shortcomings and limitations, drew some conclusions (which are perhaps not of interest to anyone), and I was reminded of another inspiring article that I have thought about several times in the past few months.

The Article: Machine learning for science and society (Editorial)
Authors: Cynthia Rudin and Kiri L. Wagstaff
Details: Machine Learning (2014) 95:1–9
URL: here

This article is an editorial for a special issue of the Machine Learning journal called “Machine Learning For Science and Society”. The issue is a collection of research papers that tackle real-life problems, ranging from water pipe condition assessment to online advertising, through ML-based approaches. While I have not yet gone through all the papers in this issue, I think the editorial is worth a read for anyone with even a remote curiosity about the phrase “Machine Learning”.

It discusses the issues that arise when you decide to study the real-life impact of ML: What exactly counts as evaluation from the applied perspective? How much does this evaluation differ based on the application domain? How do domain experts see ML: do they look for a great model, or for a good model that is interpretable? How does the ML community see such research? What is ML good for? What is the need for this special focused issue at all? And so on.

I will not go on and on like this, but I would like to quote a few things from the paper, hoping it's not a copyright violation.

The abstract:

“The special issue on “Machine Learning for Science and Society” showcases machine learning work with influence on our current and future society. These papers address several key problems such as how we perform repairs on critical infrastructure, how we predict severe weather and aviation turbulence, how we conduct tax audits, whether we can detect privacy breaches in access to healthcare data, and how we link individuals across census data sets for new insights into population changes. In this introduction, we discuss the need for such a special issue within the context of our field and its relationship to the broader world. In the era of “big data,” there is a need for machine learning to address important large-scale applied problems, yet it is difficult to find top venues in machine learning where such work is encouraged. We discuss the ramifications of this contradictory situation and encourage further discussion on the best strategy that we as a field may adopt. We also summarize key lessons learned from individual papers in the special issue so that the community as a whole can benefit.”

Then, the four points starting from: “If applied research is not considered publishable in top ML venues, our field faces the following disadvantages:”

1. “We lose the flow of applied problems necessary for stimulating relevant theoretical work ….”
2. “We further exacerbate the gap between theoretical work and practice. …”
3. “We may prevent truly new applications of ML to be published in top venues at all (ML or not). …”
4. “We strongly discourage applied research by machine learning professionals. … “

(Read the relevant section in the paper for details.)

Then the paragraph that follows, where a few example applications of ML are mentioned:

“The editors of this special issue have worked on both theoretical and applied topics, where the applied topics between us include criminology (Wang et al. 2013), crop yield prediction (Wagstaff et al. 2008), the energy grid (Rudin et al. 2010, 2012), healthcare (Letham et al. 2013b; McCormick et al. 2012), information retrieval (Letham et al. 2013a), interpretable models (Letham et al. 2013b; McCormick et al. 2012; Ustun et al. 2013), robotic space exploration (Castano et al. 2007; Wagstaff and Bornstein 2009; Wagstaff et al. 2013b), and scientific discovery (Wagstaff et al. 2013a).”

Last but not least, the comments on interdisciplinary research had such resounding truth in them that I put the quote up in my room, and a few others did the same at the interdisciplinary grad school I am a part of. :-)

“..for a true interdisciplinary collaboration, both sides need to understand each other’s specialized terminology and together develop the definition of success for the project. We ourselves must be willing to acquire at least apprentice-level expertise in the domain at hand to develop the data and knowledge discovery process necessary for achieving success. ”

This has been one of those articles I think about again and again, and keep recommending to people working in areas as diverse as psychology, sociology, and computer science, and even to people who are not into academic research at all! :-) (I wonder what these people think of me for sending them this “seemingly unrelated” article to read, though.)

*****
P.S.: It so happens that an ML article inspired me to write this post. But, on a personal front, the questions posed in the first paragraph remain the same even for my own field of research, Computational Linguistics, and perhaps for any other field too.

P.S. 2: This does not mean I have some fantastic solution to the dilemmas of all the senior researchers and grad students who are into interdisciplinary and/or applied research and, at the same time, don't want to perish because they can't publish in the conferences/journals of their main field.

Published on July 8, 2014 at 3:15 pm

Notes from EACL 2014

(This is a note-taking post. It may not be of particular interest to anyone.)

***

I was at EACL 2014 this week, in Gothenburg, Sweden. I have yet to give a detailed reading to most of the papers that interested me, but I thought it's a good idea to list things down.

I attended the PITR workshop and noticed that there were more interested people, both among the authors and in the audience, compared to last year. Despite the inconclusive panel discussion, I found the whole event interesting and stimulating, primarily because of the diversity of topics presented. There seems to be an increasing interest in performing eye-tracking experiments for this task. Some papers that particularly interested me:

One Step Closer to Automatic Evaluation of Text Simplification Systems by Sanja Štajner, Ruslan Mitkov and Horacio Saggion

An eye-tracking evaluation of some parser complexity metrics – Matthew J. Green

Syntactic Sentence Simplification for French – Laetitia Brouwers, Delphine Bernhard, Anne-Laure Ligozat and Thomas Francois

An Open Corpus of Everyday Documents for Simplification Tasks – David Pellow and Maxine Eskenazi

An evaluation of syntactic simplification rules for people with autism - Richard Evans, Constantin Orasan and Iustin Dornescu

(If anyone has read this far and is interested in any of these papers, they are all open access and can be found online by searching for the title.)


Moving on to the main conference papers, I am listing here everything that piqued my interest, from papers I know only by title for now to those whose authors I heard talk about the work.

Parsing, Machine Translation, etc.

* Is Machine Translation Getting Better over Time? - Yvette Graham; Timothy Baldwin; Alistair Moffat; Justin Zobel

* Improving Dependency Parsers using Combinatory Categorial Grammar – Bharat Ram Ambati; Tejaswini Deoskar; Mark Steedman

* Generalizing a Strongly Lexicalized Parser using Unlabeled Data – Tejaswini Deoskar; Christos Christodoulopoulos; Alexandra Birch; Mark Steedman

* Special Techniques for Constituent Parsing of Morphologically Rich Languages – Zsolt Szántó; Richárd Farkas

* The New Thot Toolkit for Fully-Automatic and Interactive Statistical Machine Translation – Daniel Ortiz-Martínez; Francisco Casacuberta

* Joint Morphological and Syntactic Analysis for Richly Inflected Languages – Bernd Bohnet, Joakim Nivre, Igor Bogulavsky, Richard Farkas, Filip Ginter and Jan Hajic

* Fast and Accurate Unlexicalized parsing via Structural Annotations – Maximilian Schlund, Michael Luttenberger and Javier Esparza

Information Retrieval, Extraction stuff:

* Temporal Text Ranking and Automatic Dating of Text – Vlad Niculae; Marcos Zampieri; Liviu Dinu; Alina Maria Ciobanu

* Easy Web Search Results Clustering: When Baselines Can Reach State-of-the-Art Algorithms – Jose G. Moreno; Gaël Dias

Others:

* Now We Stronger than Ever: African-American English Syntax in Twitter – Ian Stewart

* Chinese Native Language Identification – Shervin Malmasi and Mark Dras

* Data-driven language transfer hypotheses – Ben Swanson and Eugene Charniak

* Enhancing Authorship Attribution by utilizing syntax tree profiles – Michael Tschuggnall and Günter Specht

* Machine reading tea leaves: Automatically Evaluating Topic Coherence and Topic model quality by Jey Han Lau, David Newman and Timothy Baldwin

* Identifying fake Amazon reviews as learning from crowds – Tommaso Fornaciari and Massimo Poesio

* Using idiolects and sociolects to improve word predictions – Wessel Stoop and Antal van den Bosch

* Expanding the range of automatic emotion detection in microblogging text – Jasy Suet Yan Liew

* Answering List Questions using Web as Corpus – Patricia Gonçalves; Antonio Branco

* Modeling unexpectedness for irony detection in twitter – Francesco Barbieri and Horacio Saggion

* SPARSAR: An Expressive Poetry reader – Rodolfo Delmonte and Anton Maria Prati

* Redundancy detection in ESL writings – Huichao Xue and Rebecca Hwa

* Hybrid text simplification using synchronous dependency grammars with hand-written and automatically harvested rules – Advaith Siddharthan and Angrosh Mandya

* Verbose, Laconic or Just Right: A Simple Computational Model of Content Appropriateness under length constraints – Annie Louis and Ani Nenkova

* Automatic Detection and Language Identification of Multilingual Documents – Marco Lui, Jey Han Lau and Timothy Baldwin

Now, in the coming days, I should at least try to read the intros and conclusions of some of these papers. :-)

Published on May 2, 2014 at 3:10 pm

“Linguistically Naive != Language Independent” and my soliloquy

This post is about a paper I read today (which inspired me to write a real blog post after months!).

The paper: Linguistically Naive != Language Independent: Why NLP Needs Linguistic Typology
Author: Emily Bender
Proceedings of the EACL 2009 Workshop on the Interaction between Linguistics and Computational Linguistics, pages 26–32. ACL.

In short, this is a position paper that argues that incorporating linguistic knowledge is a must if we want to create truly language-independent NLP systems. Now, on the surface, that looks like a contradictory statement. Well, it isn't... and it is common sense, in... er... some sense ;)

So, time for some background: an NLP algorithm that offers a solution to some problem is called language independent if the approach can work for any language other than the one for which it was initially developed. One common example is Google Translate, a practical demonstration of how an approach can work across multiple language pairs (with varying efficiency, of course, but that is a different matter). The point of these language-independent approaches is that, in theory, you can apply the algorithm to any language as long as you have the relevant data about that language. However, such approaches in contemporary research typically eliminate any linguistic knowledge from their modeling and thereby make themselves “language” independent.

Now, what the paper argues for is clear from the title – “linguistically naive != language independent”.

I liked the point made in Section 2, that in some cases the surface appearance of language independence is actually hidden language dependence. The specific example of n-grams, how efficiently they work, albeit only for languages with certain kinds of properties, despite the claim of language independence, nailed down the point. Over time, I became averse to the idea of using n-grams for each and every problem, as I thought they give useful insights neither from a linguistic nor from a computational perspective (this is my personal opinion). However, although I did think about this language-dependent aspect of n-grams, I never put it this clearly and simply accepted the “language independence” claim. Now, this paper changed that acceptance. :-)
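To make that n-gram point concrete for myself, here is a minimal sketch (my own illustration, not from the paper) of how a routine bigram counter silently assumes whitespace-separated words, an assumption that holds for English but not for unsegmented languages like Chinese:

```python
from collections import Counter

def bigram_counts(text):
    # Hidden assumption: tokens are separated by whitespace.
    tokens = text.split()
    return Counter(zip(tokens, tokens[1:]))

# Behaves as intended for English:
print(bigram_counts("the cat sat on the mat"))
# e.g. Counter({('the', 'cat'): 1, ('cat', 'sat'): 1, ...})

# Silently degenerates for unsegmented Chinese text: the whole
# sentence becomes one "token", so there are no bigrams at all.
print(bigram_counts("我喜欢自然语言处理"))
# Counter()
```

The pipeline does not crash; it just quietly depends on a segmentation step that some other component, or some other language's orthography, has to supply.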

One good thing about this paper is that it does not stop there. It also discusses approaches that use language modeling but do slightly more than n-grams to accommodate various types of languages (factored language models; see the short sketch after the quote below), and it talks about how a “one size fits all” approach won't work. There is this gem of a statement:

“A truly language independent system works equally well across languages. When a system that is meant to be language independent does not in fact work equally well across languages, it is likely because something about the system design is making implicit assumptions about language structure. These assumptions are typically the result of “overfitting” to the original development language(s).”
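Incidentally, for anyone who, like me, had to look up the factored language models mentioned above: the rough idea (Bilmes and Kirchhoff, 2003) is to represent each token as a bundle of factors, so the model can back off to coarser factors when surface forms are sparse. A schematic sketch of the representation; this is my own illustration, not something from the paper:

```python
# Schematic sketch (mine, not the paper's): each token is a bundle
# of factors instead of a bare surface string.
token = {
    "surface": "Häusern",                # German 'houses', dative plural
    "lemma":   "Haus",
    "pos":     "NOUN",
    "morph":   "Case=Dat|Number=Plur",
}
# An n-gram model over surface forms alone sees 'Häusern', 'Häuser',
# 'Hauses', ... as unrelated events; backing off to (lemma, pos)
# factors lets a morphologically rich language share statistics
# across those forms.
```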

Then there is a section on language independence claims and the representation of languages belonging to various families in the papers of ACL 2008. It concludes:
“Nonetheless, to the extent that language independence is an important goal, the field needs to improve both its testing of language independence and its sampling of languages to test against.”

Finally, the paper talks about one form of linguistic knowledge that can be incorporated in linguistic systems – linguistic typology and gives pointers to some useful resources and relevant research in this direction.

And I conclude with the two main points that I hope people in the research community noticed:

(1) “This paper has briefly argued that the best way to create language-independent systems is to include linguistic knowledge, specifically knowledge about the ways in which languages vary in their structure. Only by doing so can we ensure that our systems are not overfitted to the development languages.”

(2) “Finally, if the field as a whole values language independence as a property of NLP systems, then we should ensure that the languages we select to use in evaluations are representative of both the language types and language families we are interested in.”

A good paper, and a considerable amount of food for thought! These are important design considerations, IMHO.

The extended epilogue:

At NAACL 2012, there was a tutorial titled “100 Things You Always Wanted to Know about Linguistics But Were Afraid to Ask”, by Emily Bender. Although I could in theory have attended the conference, I could not, as I had to go to India. But this was one tutorial that caught my attention with its name and description, and I really wanted to attend it.

Thanks to a colleague who attended, I managed to see the slides of the tutorial (which I later found on the professor's website). Last week, during some random surfing, I realized that an elaborated version had been released as a book:

Linguistic Fundamentals for Natural Language Processing: 100 Essentials from Morphology and Syntax
by Emily Bender
Pub: Synthesis Lectures on Human Language Technologies, Morgan and Claypool Publishers

I happily borrowed the book through inter-library loan, and it traveled for a few days to reach me, from somewhere in Lower Saxony to here in Baden-Württemberg. Just imagine, it traveled all that way just for my sake! ;) :P

So, I started to go through the book. Even in the days when I lacked any basic knowledge of this field, I always felt that natural language processing should involve some form of linguistic modeling by default. However, most of the successful so-called “language independent” approaches (some of which also became products we use regularly, like Google Translate and Transliterate) never speak about such linguistic modeling (at least, not many of those that I read).

There is also the Norvig vs. Chomsky debate, which I keep getting reminded of when I think of this topic. (Neither of them is wrong in my view, but that is not the point here.)

In this context, I found the paper particularly worth sharing. Anyway, I should perhaps end the post. While reading the introductory parts of Emily Bender's book, I found a reference to the paper, and this blog post came out of that reading experience.

Published on January 23, 2014 at 5:04 pm

Notes from ACL

This is the kind of post that would probably not interest anyone except me. I was at ACL (in a very interesting city called Sofia, the capital of Bulgaria) last week, and I am still in the process of making notes on the papers that interested me, abstracts that raised my curiosity, short- and long-term interest topics, etc. I thought it's probably a better idea to arrange at least the titles in some subgroups and save them somewhere, so that it would be easy for me to get back to them later. I did not read all of them completely. In fact, for a few of them, I did not even go beyond the abstract. So, don't ask me questions. Anyone who is interested in any of these titles can either read them by googling for them or visit the ACL Anthology page for ACL 2013 and find the PDFs there.

The first two sections below are my current topics of interest. The third is a general topic of interest. The fourth includes everything else that piqued my interest. The fifth section is on teaching CL/NLP, which is also a long-term interest topic for me. The final section is about workshops as a whole that I have an interest in.

*****

Various dimensions of the notion of text difficulty, readability
* Automatically predicting sentence translation difficulty – Mishra and Bhattacharya
* Automatic detection of deception in child produced speech using syntactic complexity features – Yancheva and Rudzicz
* Simple, readable sub sentences – Klerke and Sogaard
* Improving text simplification language modeling using Unsimplified text data – Kauchak
* Typesetting for improved readability using lexical and syntactic information – Salama et al.
* What makes writing great?: First experiments on Article quality prediction in the science journalism domain, Louis and Nenkova
* Word surprisal predicts N400 amplitude during reading – Frank et al.
* An analysis of memory based processing costs using incremental deep syntactic dependency parsing – van Schijndel et al.

Language Learning, Assessment etc.
* Discriminative Approach to fill-in-the-blank quiz generation for language learners
* Modeling child divergences from Adult Grammar with Automatic Error Correction
* Automated collocation suggestion for Japanese second language learners
* Reconstructing an Indo-European family tree from non-native English texts
* Word association profiles and their use for automated scoring of essays – Klebanov and Flor
* Grammatical error correction using Integer Linear programming
* A learner corpus based approach to verb suggestion for ESL
* Modeling thesis clarity in student essays – Persing & Ng
* Computerized analysis of a verbal fluency test – Szumlanski et al.
* Exploring word class n-grams to measure language development in children – Ramirez-de-la-Rosa et al.

NLP for other languages:
* Sorani Kurdish versus Kurmanji Kurdish: An Empirical Comparison – Esmaili and Salavati
* Identifying English and Hungarian light verb constructions: A contrastive approach – Vincze et al.
* Real-world semi-supervised learning of POS taggers for low-resource languages – Garrette et al.
* Learning to lemmatize Polish noun phrases – Radziszewski
* Sentence level dialect identification in Arabic – Elfardy and Diab

Others:
* Exploring Word Order Universals: a probabilistic graphical model approach – Xia Lu.
* An open-source toolkit for quantitative historical linguists
* SORT: An interactive source-rewriting tool for improved translation
* unsupervised consonant-vowel prediction over hundreds of languages
* Linguistic models for analyzing and detecting biased language.
* Earlier identification of Epilepsy surgery candidates using natural language processing – Matykiewicz et al.
* Parallels between linguistics and biology. Chakraborti and Tendulkar
* Analysing lexical consistency in translation – Guillou
* Associative texture is lost in translation – Klebanov and Flor

Teaching CL, NLP:
* Artificial IntelliDance: Teaching Machine learning through choreography, Agarwal and Trainor
* Treebanking for data-driven research in the classroom – Lee et al.
* Learning computational linguistics through NLP evaluation events: the experience of Russian evaluation initiative – Bonch-Osmolovskaya et al.
* Teaching the basics of NLP and ML in an introductory course to Information Science. Agarwal.

whole workshops and competitions:
* Shared task on quality estimation in Machine translation
* Predicting and improving textual readability for target reader populations (PITR 2013)

Published on August 14, 2013 at 9:09 am

Machine Learning that Matters – Some thoughts.

It's almost a year since Praneeth sent me this paper and I read it... and began blogging about it. I began re-reading it today, as a part of my “evaluating the evaluation” readings, and thought I still have something to say (largely to myself) on some of the points mentioned in this paper.

Machine Learning that Matters
by Kiri L. Wagstaff
Published in the proceedings of ICML 2012.

This is how it begins:


“Much of current machine learning (ML) research has lost its connection to problems of import to the larger world of science and society”

-I guess the tone and intention of this paper are pretty clear from this first sentence.

I don't have any issues with the tone as such, but I thought: there are so many real-world applications of machine learning these days! That doesn't mean every machine learning research problem leads to solving a real-world problem, though, and that holds for any research. So, the above statement, in my view, can apply to research in general.

I was fascinated by these statistics on the hyper-focus on benchmark datasets.

A survey of the 152 non-cross-conference papers published at ICML 2011 reveals:
148/152 (93%) include experiments of some sort
57/148 (39%) use synthetic data
55/148 (37%) use UCI data
34/148 (23%) use ONLY UCI and/or synthetic data
1/148 (1%) interpret results in domain context

-Since I am not into machine learning research but only use ML for computational linguistics problems, I found this to be very interesting… and a very valid point.

Then, the discussion moves on to evaluation metrics:

“These metrics are abstract in that they explicitly ignore or remove problem-specific details, usually so that numbers can be compared across domains. Does this seemingly obvious strategy provide us with useful information?”

-In the discussion that followed, there were some interesting points on what various evaluation metrics fail to capture. I have been reading on the topic of evaluation metrics for supervised machine learning in the recent past... and, as with those readings, I am left with the same question here: what is the best evaluation, then? Of course, “real world” evaluation. But how do you quantify that? How can there be an evaluation metric that is truly comparable with those of other peer research groups?

I got my answer in the later part of the paper:

Yet (as noted earlier) the common approach of using the same metric for all domains relies on an unstated, and usually unfounded, assumption that it is possible to equate an x% improvement in one domain with that in another. Instead, if the same method can yield profit improvements of $10,000 per year for an auto-tire business as well as the avoidance of 300 unnecessary surgical interventions per year, then it will have demonstrated a powerful, wide-ranging utility.
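That contrast is easy to make concrete. Here is a tiny sketch (my own illustration with invented numbers, not from the paper) of how the same classifier output reads under an abstract metric versus a domain-grounded one:

```python
# Hypothetical confusion counts for a medical screening classifier
# (all numbers invented for illustration).
tp, fp, fn, tn = 40, 10, 5, 945

# Abstract view: one domain-free number.
accuracy = (tp + tn) / (tp + fp + fn + tn)
print(f"accuracy = {accuracy:.1%}")  # 98.5%, looks great in a table

# Domain view: suppose each false positive triggers an unnecessary
# surgical intervention and each false negative is a missed diagnosis.
print(f"{fp} unnecessary surgeries, {fn} missed diagnoses per 1000 patients")
```

The first number is comparable across papers; the second is what a hospital would actually weigh.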

The next part of the discussion is on identifying where machine learning matters:

“It is very hard to identify a problem for which machine learning may offer a solution, determine what data should be collected, select or extract relevant features, choose an appropriate learning method, select an evaluation method, interpret the results, involve domain experts, publicize the results to the relevant scientific community, persuade users to adopt the technique, and (only then) to truly have made a difference”

-Now, I like that. :-) :-)

I also liked this point on the involvement of the world outside ML.

“We could also solicit short “Comment” papers, to accompany the publication of a new ML advance, that are authored by researchers with relevant domain expertise but who were uninvolved with the ML research. They could provide an independent assessment of the performance, utility, and impact of the work. As an additional benefit, this informs new communities about how, and how well, ML methods work.”

“Finally, we should consider potential impact when selecting which research problems to tackle, not merely how interesting or challenging they are from the ML perspective. How many people, species, countries, or square meters would be impacted by a solution to the problem? What level of performance would constitute a meaningful improvement over the status quo?”

-Well, I personally share the sentiments expressed here. I like, and want, to work on problems whose solutions can possibly have a real-life impact. However, I consider that my personal choice. But I don't understand what is wrong with doing something because it's challenging! What's wrong with research for fact-finding? There will be practical implications for certain research problems. There might not be an immediate impact for some. There might not be a direct impact for some. There might not really be a practical impact for some. But should that be the only deciding factor? (Well, of course, when the researchers are funded from public taxes, perhaps it's expected to be thus. But should it be thus, always??)

I found the six old and new Machine learning impact challenges really interesting.
Here are the new ones from the paper:

1. A law passed or legal decision made that relies on the result of an ML analysis.
2. $100M saved through improved decision making provided by an ML system.
3. A conflict between nations averted through high-quality translation provided by an ML system.
4. A 50% reduction in cybersecurity break-ins through ML defenses.
5. A human life saved through a diagnosis or intervention recommended by an ML system.
6. Improvement of 10% in one country’s Human Development Index (HDI) (Anand & Sen,1994) attributable to an ML system.

And finally, I found the concluding discussion on obstacles to ML impact to be very true as well. I don't know why there is so little work on making machine learning output comprehensible to its users (e.g., doctors using a classifier to identify certain traits in a patient might not want to see a raw SVM output and take a decision without understanding it!). At least, I did not find much work on human-comprehensible machine learning.
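To illustrate the kind of gap I mean, here is a small scikit-learn sketch (my own toy example with invented data, not something from the paper): the same tiny dataset yields an opaque score from an SVM, and a rule list from a shallow decision tree that a doctor could at least sanity-check:

```python
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical patient features: [age, systolic blood pressure]
X = [[35, 120], [50, 145], [62, 160], [28, 110], [70, 155], [45, 130]]
y = [0, 1, 1, 0, 1, 0]  # 1 = at risk (invented labels)

svm = SVC().fit(X, y)
# An unscaled margin distance: hard to act on without ML background.
print(svm.decision_function([[55, 150]]))

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
# A readable rule list, e.g. "blood_pressure <= 137.5 -> class 0".
print(export_text(tree, feature_names=["age", "blood_pressure"]))
```

The tree is not automatically the better model, but its output is something a domain expert can interrogate.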

As I read it again and again, this paper seems to me like a theory vs. practice debate (generally speaking), and it is possibly worth reading for anyone outside the machine learning community too (it was useful for me!).

******
End disclaimer: All the thoughts expressed here are my individual feelings and are not related to my employer. :-)

Published on March 26, 2013 at 12:35 pm