Comments on the Editorial of “Machine Learning For Science and Society” issue

For whatever reason, I am more fascinated by the applied aspects of any research, and Machine Learning (ML) is no exception. While I use machine learning approaches in my work and studied the basics during my master's (… and on and off during my PhD now), I never found much information on what happens to all the hundreds of new algorithms proposed every year. How many of them actually get used by non-ML researchers working on some other problem? How many of them get used by others who want to solve real-world problems?

I attended the Machine Learning Summer School in 2013, where, for two weeks, I was fortunate enough to listen to some of the best researchers in the field speak about ML in general and their work in particular. However, I got the feeling that the community is not so keen on a reality check about the applicability of these algorithms. So, basically, the questions remained.

“Machine learning that matters” (Kiri Wagstaff, 2012) is an article I keep thinking about whenever this sort of discussion comes up with fellow grad students. (My thoughts on it are here.) In the past few days, there have been a lot of short online/offline discussions about how an effort to do more evaluation on real-world scenarios/datasets is perceived by reviewers at various academic conferences (disclaimer: these discussions are not exclusively about ML, but some of the people in them happen to be grad students working in ML).
We, with our own shortcomings and limitations, drew some conclusions (which are perhaps of interest to no one), and I was reminded of another inspiring article that I have thought about several times in the past few months.

The Article: Machine learning for science and society (Editorial)
Authors: Cynthia Rudin and Kiri L. Wagstaff
Details: Machine Learning (2014) 95:1–9
Url here

This article is an editorial for a special issue of the Machine Learning journal called “Machine Learning For Science and Society”. The issue is a collection of research papers that tackle real-life problems, ranging from water pipe condition assessment to online advertising, through ML-based approaches. While I have not yet gone through all the papers in this issue, I think the editorial is worth a read for anyone with even a remote curiosity about the phrase “Machine Learning”.

It discusses the issues that arise when you decide to study the real-life impact of ML: What exactly counts as evaluation from the applied perspective? How much does this evaluation differ based on the application domain? How do domain experts see ML – do they look for a great model, or for a good model that is interpretable? How does the ML community see such research? What is ML good for? What is the need for this special focused issue at all? And so on.

I will not go on and on like this, but I would like to quote a few things from the paper, hoping it's not a copyright violation.

The abstract:

“The special issue on “Machine Learning for Science and Society” showcases machine learning work with influence on our current and future society. These papers address several key problems such as how we perform repairs on critical infrastructure, how we predict severe weather and aviation turbulence, how we conduct tax audits, whether we can detect privacy breaches in access to healthcare data, and how we link individuals across census data sets for new insights into population changes. In this introduction, we discuss the need for such a special issue within the context of our field and its relationship to the broader world. In the era of “big data,” there is a need for machine learning to address important large-scale applied problems, yet it is difficult to find top venues in machine learning where such work is encouraged. We discuss the ramifications of this contradictory situation and encourage further discussion on the best strategy that we as a field may adopt. We also summarize key lessons learned from individual papers in the special issue so that the community as a whole can benefit.”

Then, the four points starting from: “If applied research is not considered publishable in top ML venues, our field faces the following disadvantages:”

1. “We lose the flow of applied problems necessary for stimulating relevant theoretical work ….”
2. “We further exacerbate the gap between theoretical work and practice. …”
3. “We may prevent truly new applications of ML to be published in top venues at all (ML or not). …”
4. “We strongly discourage applied research by machine learning professionals. … “

(Read the relevant section in the paper for details.)

The paragraph that followed, in which a few example applications of ML were mentioned:

“The editors of this special issue have worked on both theoretical and applied topics, where the applied topics between us include criminology (Wang et al. 2013), crop yield prediction (Wagstaff et al. 2008), the energy grid (Rudin et al. 2010, 2012), healthcare (Letham et al. 2013b; McCormick et al. 2012), information retrieval (Letham et al. 2013a), interpretable models (Letham et al. 2013b; McCormick et al. 2012; Ustun et al. 2013), robotic space exploration (Castano et al. 2007; Wagstaff and Bornstein 2009; Wagstaff et al. 2013b), and scientific discovery (Wagstaff et al. 2013a).”

Last but not least, the comments on interdisciplinary research rang so true that I put the quote up in my room, and a few others did the same in the interdisciplinary grad school I am a part of. :-)

“..for a true interdisciplinary collaboration, both sides need to understand each other’s specialized terminology and together develop the definition of success for the project. We ourselves must be willing to acquire at least apprentice-level expertise in the domain at hand to develop the data and knowledge discovery process necessary for achieving success. ”

This has been one of those articles that I thought about again and again and kept recommending to people working in areas as diverse as psychology, sociology, and computer science, and even to people who are not into academic research at all! :-) (I wonder what these people think of me for sending them a “seemingly unrelated” article to read, though.)

*****
P.S.: It so happens that an ML article inspired me to write this post. But, on a personal front, the questions posed in the first paragraph remain the same for my own field of research – Computational Linguistics – and perhaps for any other field too.

P.S. 2: This does not mean I have some fantastic solution to the dilemmas of all the senior researchers and grad students who are into interdisciplinary and/or applied research and at the same time don't want to perish because they can't publish in the conferences/journals of their main field.

Published on July 8, 2014 at 3:15 pm

Notes from EACL2014

(This is a note-taking post. It may not be of particular interest to anyone.)

***

I was at EACL 2014 this week, in Gothenburg, Sweden. I have yet to give a detailed reading to most of the papers that interested me, but I thought it's a good idea to list them down.

I attended the PITR workshop and noticed that there were more interested people, both among the authors and in the audience, compared to last year. Despite the inconclusive panel discussion, I found the whole event interesting and stimulating, primarily because of the diversity of topics presented. There seems to be an increasing interest in performing eye-tracking experiments for this task. Some papers that particularly interested me:

One Step Closer to Automatic Evaluation of Text Simplification Systems by Sanja Štajner, Ruslan Mitkov and Horacio Saggion

An eye-tracking evaluation of some parser complexity metrics – Matthew J. Green

Syntactic Sentence Simplification for French – Laetitia Brouwers, Delphine Bernhard, Anne-Laure Ligozat and Thomas Francois

An Open Corpus of Everyday Documents for Simplification Tasks – David Pellow and Maxine Eskenazi

An evaluation of syntactic simplification rules for people with autism – Richard Evans, Constantin Orasan and Iustin Dornescu

(If anyone has read this far and is interested in any of these papers, they are all open access and can be found online by searching for the title.)

 

Moving on to the main conference papers, I am listing here everything that piqued my interest, from papers I know only by title for the moment to those for which I heard the authors talk about the work.

Parsing, Machine Translation, etc.:

* Is Machine Translation Getting Better over Time? – Yvette Graham; Timothy Baldwin; Alistair Moffat; Justin Zobel

* Improving Dependency Parsers using Combinatory Categorial Grammar – Bharat Ram Ambati; Tejaswini Deoskar; Mark Steedman

* Generalizing a Strongly Lexicalized Parser using Unlabeled Data – Tejaswini Deoskar; Christos Christodoulopoulos; Alexandra Birch; Mark Steedman

* Special Techniques for Constituent Parsing of Morphologically Rich Languages – Zsolt Szántó; Richárd Farkas

* The New Thot Toolkit for Fully-Automatic and Interactive Statistical Machine Translation – Daniel Ortiz-Martínez; Francisco Casacuberta

* Joint Morphological and Syntactic Analysis for Richly Inflected Languages – Bernd Bohnet, Joakim Nivre, Igor Bogulavsky, Richard Farkas, Filip Ginter and Jan Hajic

* Fast and Accurate Unlexicalized parsing via Structural Annotations – Maximilian Schlund, Michael Luttenberger and Javier Esparza

Information Retrieval, Extraction stuff:

* Temporal Text Ranking and Automatic Dating of Text – Vlad Niculae; Marcos Zampieri; Liviu Dinu; Alina Maria Ciobanu

* Easy Web Search Results Clustering: When Baselines Can Reach State-of-the-Art Algorithms – Jose G. Moreno; Gaël Dias

Others:

* Now We Stronger than Ever: African-American English Syntax in Twitter – Ian Stewart

* Chinese Native Language Identification – Shervin Malmasi and Mark Dras

* Data-driven language transfer hypotheses – Ben Swanson and Eugene Charniak

* Enhancing Authorship Attribution by utilizing syntax tree profiles – Michael Tschuggnall and Günter Specht

* Machine reading tea leaves: Automatically Evaluating Topic Coherence and Topic model quality by Jey Han Lau, David Newman and Timothy Baldwin

* Identifying fake Amazon reviews as learning from crowds – Tommaso Fornaciari and Massimo Poesio

* Using idiolects and sociolects to improve word predictions – Wessel Stoop and Antal van den Bosch

* Expanding the range of automatic emotion detection in microblogging text – Jasy Suet Yan Liew

* Answering List Questions using Web as Corpus – Patricia Gonçalves; Antonio Branco

* Modeling unexpectedness for irony detection in twitter – Francesco Barbieri and Horacio Saggion

* SPARSAR: An Expressive Poetry reader – Rodolfo Delmonte and Anton Maria Prati

* Redundancy detection in ESL writings – Huichao Xue and Rebecca Hwa

* Hybrid text simplification using synchronous dependency grammars with hand-written and automatically harvested rules – Advaith Siddharthan and Angrosh Mandya

* Verbose, Laconic or Just Right: A Simple Computational Model of Content Appropriateness under length constraints – Annie Louis and Ani Nenkova

* Automatic Detection and Language Identification of Multilingual Documents – Marco Lui, Jey Han Lau and Timothy Baldwin

Now, in the coming days, I should at least try to read the intros and conclusions of some of these papers. :-)

Published on May 2, 2014 at 3:10 pm

On Openmindedness

On an impulse, I started looking through the issues of a journal called Educational Researcher. I just started looking (just looking) at the titles of all the articles since 1972. One of the titles I found was “On the Nature of Educational Research”, and these were the concluding remarks from that article.

“Openmindedness is not empty mindedness, however, and it is not tolerance of all views good or bad. It is having a sincere concern for truth and a willingness to consider, test, argue and revise on the basis of evidence our own and others’ claims in a reasonable and fair manner (Hare, 1979). This doesn’t mean that we will always reach agreement, or even that we will always be able to understand and appreciate the arguments of others, or that we cannot be committed to a position of our own. Openmindedness only requires a sincere attempt to consider the merits of other views and their claims. It does not release us from exercising judgement.”

From: “On the Nature of Educational Research” by Jonas F. Soltis. Educational Researcher. 1984. 13(5).
If anyone has access, it can be read here.

The Hare (1979) referred to in this quote is this.

I wonder if the quote is only valid in the context of education!

Published on April 15, 2014 at 1:03 pm

Significant peace

Now, the amount of mental peace I felt after reading this (even if only for a few moments) makes it inevitable that I drop a line or two about it in my blog :-) Even if it's momentary, I don't consider the peace random or arbitrary. I consider it significant ;-).

The questions on the use of statistical significance for large datasets have been bugging me for some time now, although I never really did anything about them. They only kept coming back, more and more frequently, especially each time a reviewer asked about significance tests, and I wondered: “Won't everything become significantly different if you have a large N?” As the perennial fledgling researcher, though, my first instinct is to doubt my own understanding of the process.

I came across this piece, “Language is never, ever, ever, random” by Adam Kilgarriff, which brought me some mental peace in what is (in my imagination) one of the very confusing phases of my life at the moment :-)

Here are the details of the paper:
Language is never, ever, ever, random
by Adam Kilgarriff
Corpus Linguistics and Linguistic Theory 1-2 (2005), 263-276

The abstract:
“Language users never choose words randomly, and language is essentially non-random. Statistical hypothesis testing uses a null hypothesis, which posits randomness. Hence, when we look at linguistic phenomena in corpora, the null hypothesis will never be true. Moreover, where there is enough data, we shall (almost) always be able to establish that it is not true. In corpus studies, we frequently do have enough data, so the fact that a relation between two phenomena is demonstrably non-random, does not support the inference that it is not arbitrary. We present experimental evidence of how arbitrary associations between word frequencies and corpora are systematically non-random. We review literature in which hypothesis testing has been used, and show how it has often led to unhelpful or misleading results.”

And the take-home message (according to me):
Hypothesis testing has been used to reach conclusions where the difficulty in reaching a conclusion is caused by sparsity of data. But language data, in this age of information glut, is available in vast quantities. A better strategy will generally be to use more data. Then the difference between the motivated and the arbitrary will be evident without the use of compromised hypothesis testing. As Lord Rutherford put it: “If your experiment needs statistics, you ought to have done a better experiment.”
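To convince myself of the “large N” intuition, I tried a small sketch (my own toy numbers and code, not anything from Kilgarriff's paper). A chi-square test on a 2x2 contingency table compares a word's frequency in two corpora; the same negligible relative difference (100 vs. 110 occurrences per million tokens) is nowhere near significant with million-token corpora, but becomes overwhelmingly “significant” once the corpora reach web scale:

# Assumes scipy is installed; the counts and corpus sizes are invented for illustration.
from scipy.stats import chi2_contingency

def word_freq_test(count_a, size_a, count_b, size_b):
    # 2x2 table: this word vs. all other tokens, corpus A vs. corpus B
    table = [[count_a, size_a - count_a],
             [count_b, size_b - count_b]]
    chi2, p, _, _ = chi2_contingency(table)
    return chi2, p

# 100 vs. 110 per million, with 1M tokens per corpus: not significant.
print("N = 1M each:   chi2 = %.2f, p = %.3g" % word_freq_test(100, 10**6, 110, 10**6))
# Same relative difference, with 100M tokens per corpus: p becomes astronomically small.
print("N = 100M each: chi2 = %.2f, p = %.3g" % word_freq_test(10_000, 10**8, 11_000, 10**8))

Nothing about the (tiny) linguistic difference changes between the two calls; only N does, which is exactly why “significantly different” tells us so little once corpora get huge.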

Published on March 4, 2014 at 11:50 am

The Stronger – August Strindberg

“Persona” was the first Ingmar Bergman movie I watched, in mid-2008 or so. Since then, I have watched a couple of his movies, read some of his writings, and reached Strindberg through him in the past few years. However, “Persona” remained the most intriguing movie, although it's not my favorite Bergman movie. Although I don't think I understand the movie, it was the one that raised my curiosity about Bergman as a writer and set me on the path of watching his other movies. While listening to the lectures on Bergman in the Scandinavian Film and Television course on Coursera, I learnt that Strindberg's one-act play “The Stronger” was an inspiration for “Persona”.

[The word “inspiration” is very different from “copy”. Both the play and the movie are independent entities and are equally worth checking out. I personally would consider Persona to be a much more complex psychological drama, and it's much longer.]

Now, “The Stronger” did not particularly fascinate me. But it is hard not to think about the characters and their possible interpretations after reading the play. It's short, very short, but it has its impact on the reader nevertheless. I will not say anything more, but will quote something that I read again and again in the play (no, not because I don't understand English, but because the characters came alive in front of my eyes when I read the monologue).

“Everything, everything came from you to me, even your passions. Your soul crept into mine, like a worm into an apple, ate and ate, bored and bored, until nothing was left but the rind and a little black dust within. I wanted to get away from you, but I couldn’t; you lay like a snake and charmed me with your black eyes; I felt that when I lifted my wings they only dragged me down; I lay in the water with bound feet, and the stronger I strove to keep up the deeper I worked myself down, down, until I sank to the bottom, where you lay like a giant crab to clutch me in your claws–and there I am lying now.

I hate you, hate you, hate you! And you only sit there silent–silent and indifferent; indifferent whether it’s new moon or waning moon, Christmas or New Year’s, whether others are happy or unhappy; without power to hate or to love; as quiet as a stork by a rat hole–you couldn’t scent your prey and capture it, but you could lie in wait for it! “

Here is an interesting analysis of the play.

A few months back, I bought the screenplay of “Persona” and found a pdf of critical essays on the film. Perhaps it's time to start reading them soon! :-)

Published on February 23, 2014 at 1:16 pm

Questions to Mother Nature

I wrote this in May 2013, wondering how long those cold days would last. Looks like it's time to complain about not having snow this time! I wonder if there will be a time when I won't complain! ;)

****

Mother nature, mother nature,
May I be so bold –
and call you cold
for making our May so cold?

I knew you had a heart of gold
You forgave us since time old
Now, is your temper losing its hold?
is that what is being told?

Or is this just your way to scold
your problem children, the man-fold?
I know, your fury was foretold
and perhaps, we should never be cajold

But, Mother nature, mother nature
it hurts, this cold
have some mercy rolled
and please, let some warmth be doled!

Published on February 2, 2014 at 1:10 pm

“Linguistically Naive != Language Independent” and my soliloquy

This post is about a paper that I read today (which inspired me to write a real blog post after months!)

The paper: Linguistically Naive != Language Independent: Why NLP Needs Linguistic Typology
Author: Emily Bender
Proceedings of the EACL 2009 Workshop on the Interaction between Linguistics and Computational Linguistics, pages 26–32. ACL.

In short, this is a position paper that argues that incorporating linguistic knowledge is a must if we want to create truly language-independent NLP systems. Now, on the surface, that looks like a contradictory statement. Well, it isn't… and it is common sense, in… er… some sense ;)

So, time for some background: an NLP algorithm that offers a solution to some problem is called language independent if the approach can work for languages other than the one for which it was initially developed. One common example is Google Translate; it is a practical example of how an approach can work across multiple language pairs (with varying efficiencies of course, but that is a different matter). The point of these language-independent approaches is that, in theory, you can just apply the algorithm to any language as long as you have the relevant data about that language. However, such approaches in contemporary research typically eliminate any linguistic knowledge from their modeling and thereby become “language” independent.

Now, what the paper argues for is clear from the title – “linguistically naive != language independent”.

I liked the point made in Section 2, that in some cases the surface appearance of language independence is actually a hidden language dependence. The specific example of n-grams, how efficiently they work (albeit for languages with certain kinds of properties), and the accompanying claim of language independence – that nailed down the point. Over a period of time, I became averse to the idea of using n-grams for each and every problem, as I thought it was not giving any useful insights from either a linguistic or a computational perspective (this is my personal opinion). However, although I did think of this language-dependent aspect of n-grams, I never clearly put it this way, and I just accepted the “language independence” claim. Now, this paper changed that acceptance. :-)
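To make that hidden dependence concrete for myself, here is a tiny sketch (my own toy example, not from Bender's paper) of how a word-bigram extractor that looks language independent quietly assumes that languages delimit words with whitespace:

from collections import Counter

def word_bigrams(text):
    # "Language independent": split on whitespace, count adjacent word pairs.
    tokens = text.split()
    return Counter(zip(tokens, tokens[1:]))

# English: whitespace tokenization yields usable bigrams.
print(word_bigrams("the cat sat on the mat"))

# Japanese, written without spaces between words: the whole sentence becomes
# one "token", so the extractor silently returns no bigrams at all -- the
# method was overfitted to whitespace-delimited languages all along.
print(word_bigrams("猫がマットの上に座った"))

The code runs without complaint on both inputs, which is precisely why the implicit assumption is so easy to miss until you test on languages with a different structure.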

One good thing about this paper is that it does not stop there. It also explains approaches that use language modeling but do slightly more than n-grams to accommodate various types of languages (factored language models), and it talks about how a “one size fits all” approach won't work. There is this gem of a statement:

“A truly language independent system works equally well across languages. When a system that is meant to be language independent does not in fact work equally well across languages, it is likely because something about the system design is making implicit assumptions about language structure. These assumptions are typically the result of “overfitting” to the original development language(s).”

Now, there is a section on language independence claims and the representation of languages belonging to various families in the papers of ACL 2008, which concludes:
“Nonetheless, to the extent that language independence is an important goal, the field needs to improve both its testing of language independence and its sampling of languages to test against.”

Finally, the paper talks about one form of linguistic knowledge that can be incorporated into NLP systems – linguistic typology – and gives pointers to some useful resources and relevant research in this direction.

And I too conclude this post with the two main points that I hope people in the research community noticed:

(1) “This paper has briefly argued that the best way to create language-independent systems is to include linguistic knowledge, specifically knowledge about the ways in which languages vary in their structure. Only by doing so can we ensure that our systems are not overfitted to the development languages.”

(2) “Finally, if the field as a whole values language independence as a property of NLP systems, then we should ensure that the languages we select to use in evaluations are representative of both the language types and language families we are interested in.”

Good paper, and a considerable amount of food for thought! These are important design considerations, IMHO.

The extended epilogue:

At NAACL-2012, there was a tutorial titled “100 Things You Always Wanted to Know about Linguistics But Were Afraid to Ask”, by Emily Bender. At that time, although in theory I could have attended the conference, I could not, as I had to go to India. But this was one tutorial that caught my attention with its name and description, and I really wanted to attend it.

Thanks to a colleague who attended, I managed to see the slides of the tutorial (which I later saw on the professor’s website). Last week, during some random surfing, I realized that an elaborate version was released as a book:

Linguistic Fundamentals for Natural Language Processing: 100 Essentials from Morphology and Syntax
by Emily Bender
Pub: Synthesis Lectures on Human Language Technologies, Morgan and Claypool Publishers

I happily borrowed the book using the inter-library loan, and it travelled for a few days and reached me here in Baden-Württemberg from somewhere in Lower Saxony. Just imagine, it travelled all the way just for my sake! ;) :P

So, I started to go through the book. Even in the days when I lacked any basic knowledge of this field, I always felt that natural language processing should involve some form of linguistic modeling by default. However, most of the successful, so-called “language independent” approaches (some of which also became products we use regularly, like Google Translate and Transliterate) never speak about such linguistic modeling (at least, not many that I read).

There is also the Norvig vs. Chomsky debate, which I keep getting reminded of when I think of this topic. (Neither of them is wrong in my view, but that is not the point here.)

In this context, I found the paper particularly worth sharing. Anyway, I perhaps should end the post. While reading the introductory parts of Emily Bender’s book, I found a reference to the paper, and this blog post came out of that reading experience.

Published on January 23, 2014 at 5:04 pm

Antonius Block’s questions

On an nth revisit of “The Seventh Seal” film script, I was again rereading the same questions… visualizing the same scene in the movie. This is where Antonius Block asks the entity in the confession box (not knowing that it is Death) about God. Those haunting questions…
****

“Call it whatever you like. Is it so cruelly inconceivable to grasp God with the senses? Why should He hide himself in a mist of half-spoken promises and unseen miracles?

How can we have faith in those who believe when we can’t have faith in ourselves? What is going to happen to those of us who want to believe but aren’t able to? And what is to become of those who neither want nor are capable of believing?

Why can’t I kill the God within me? Why does He live on in this painful and humiliating way even though I curse Him and want to tear Him out of my heart? Why, in spite of everything, is He a baffling reality that I can’t shake off?”

******

Published on January 4, 2014 at 7:18 pm

The “Ikiru” revisit post

A few weeks back, Nagini Kandala posted on pustakam.net about Leo Tolstoy's “The Death of Ivan Ilyich”. I felt the story looked so similar to Akira Kurosawa's 1952 film “Ikiru”, and came to know that the movie was actually inspired by the novella. Now, I still have not read Tolstoy's novella, but my thoughts focused on Ikiru.

Thanks to the wonderful Inter-Library Loan scheme here, a couple of days ago I got a Criterion Collection DVD of Ikiru, with a bonus documentary on Kurosawa and several other perks. I first watched the movie more than five years ago (here is a small article I wrote on it at Navatarangam.com), so I wondered if it would seem any different to me now.

(FYI: I realized recently that my thoughts on what I liked about Rashomon changed significantly from my first watch.)

For now, this small post is just some random notes on the movie and its accompanying commentaries on the DVD set.

Ikiru – movie

“Over the years I have seen Ikiru every five years or so, and each time it has moved me, and made me think. And the older I get, the less Watanabe seems like a pathetic old man, and the more he seems like every one of us.”
– Roger Ebert, the famous film critic, on this movie.

When I finished watching the movie, although I did not know Ebert's words then, I felt exactly the same way… that I was finding the old man Watanabe less irritating and closer to life.

Now, I think I can say that this is one of the best movies I have watched (okay, I have not yet watched most of the “must watch” movies on those lists of tens and hundreds of films).

To know more about Ikiru, visit its Wikipedia page.

As much as I want to write more here, for now, I don’t want to. May be some other time.

Criterion collection – Comments
Apart from the movie itself, the first DVD contained another version of the movie with commentary from the Criterion Collection folks. The commentary ran for almost as long as the movie. I liked the idea, I really enjoyed the commentary to a large extent, and I listened to it without skipping any part. (So I ended up watching the movie again!)

The comments on certain details I had failed to notice when I watched the movie (e.g., on dressing style, mannerisms, etc.) and the trivia shared were certainly interesting. However, there were also moments where I felt it was overkill; I wondered if so much analysis and spoon-feeding was really necessary. Also, despite the apparent knowledge of the commentator and the depth of the analysis, I was eventually left with the feeling: “After all, all this commentary is just his interpretation of the movie.”

(Disclaimer: Okay, all you film critics, film students, etc. – don't blast me. I would like to think freely, at least after some initial guidance. I don't like these spoon-feeding kinds of commentaries, and it's just a personal preference. I won't respond to spiteful comments.)

Anyway, I do think the idea of adding a commented version is great, and it should be used at one's own discretion.

Documentary on Kurosawa’s movies:

The best part of the second DVD in this set is listening to Kurosawa speak about his movies. When I read his autobiography, the only thing that disappointed me was the fact that he stopped the story just before the international release of “Rashomon”. Since all the movies of his that I had seen came after it, I was naturally curious to hear his stories about those movies. This documentary filled that void, not only by having him talk about his various movies but also by interspersing his comments with those of people who worked with him and with video clips from the shoots.

For aspiring film-makers, these documentaries provide interesting and useful tips. For general film viewers, they are very interesting and informative. Who does not want a sneak peek into the film production life of their favourite director? This is an interview that could be revisited again and again; I would perhaps rent this DVD again after a few months or years.

There ends the story of how a dull early-autumn weekend was made colorful, thanks to this DVD! :-)

Published on October 14, 2013 at 6:22 am

Sadgati

(I found this short note in my drafts folder, written in May 2013)
******

A few days ago, I ended up watching “Sadgati”. It is a 1981 Hindi film by Satyajit Ray. I wonder if such a short-duration film should actually be called a short film, but that is not the point. Sadgati is based on a story by Munshi Premchand and is a (rather silent) commentary on the caste system. I say silent because this is more of a depiction/narration than a real commentary. No one tries to take stances. No one tries to preach to us. Yet, the intended message reaches us through the impact the narrative creates.

What I also liked was the fact that the movie ran for less than an hour. Although (with zero knowledge of movie making) I think it could have been even shorter, this is an ideal time frame to turn a short story into a movie and create a strong impact. My favourite Telugu directors would have made it spicier, with songs, fights, and a 2.5-hour duration, but that is a different story anyway :-).

The lead actors, Om Puri and Mohan Agashe, were brilliant. Smita Patil had a rather small role, but I continue to be amazed by her mature portrayal of such roles despite her actual age when she played them. In all, this is a short but strong movie which will “haunt” you, as one of the online reviews I read put it.

Published on September 16, 2013 at 1:54 pm