On Openmindedness

On an impulse, I started looking at the issues of a journal called Educational Researcher – just looking (only looking!) at the titles of all the articles published since 1972. One of the titles I found was “On the Nature of Educational Research”, and these were the concluding remarks from that article:

“Openmindedness is not empty mindedness, however, and it is not tolerance of all views good or bad. It is having a sincere concern for truth and a willingness to consider, test, argue and revise on the basis of evidence our own and others’ claims in a reasonable and fair manner (Hare, 1979). This doesn’t mean that we will always reach agreement, or even that we will always be able to understand and appreciate the arguments of others, or that we cannot be committed to a position of our own. Openmindedness only requires a sincere attempt to consider the merits of other views and their claims. It does not release us from exercising judgement.”

From: “On the Nature of Educational Research” by Jonas F. Soltis. Educational Researcher, 1984, 13(5).
If anyone has access, it can be read here.

The Hare (1979) referred to in this quote is this.

I wonder if the quote is valid only in that context of education!

Published on April 15, 2014 at 1:03 pm

Significant peace

Now, the amount of mental peace I felt after reading this (even if just for a few moments) makes it inevitable that I drop a line or two about it on my blog :-) Even if it’s momentary, I don’t consider the peace random or arbitrary. I consider it significant ;-).

Questions about the use of statistical significance testing on large datasets have been bugging me for some time now, although I never really did anything about them. They only kept coming back, more and more frequently. Especially each time a reviewer asked about significance tests, I wondered: “Won’t everything become significantly different if you have a large N?”. As the perennial fledgling researcher, though, my first instinct is to doubt my own understanding of the process.

I came across this piece, “Language is never, ever, ever, random” by Adam Kilgarriff, which brought me some mental peace in what is (in my imagination) one of the more confusing phases of my life at the moment :-)

Here are the details of the paper:
Language is never, ever, ever, random
by Adam Kilgarriff
Corpus Linguistics and Linguistic Theory 1-2 (2005), 263-276

The abstract:
“Language users never choose words randomly, and language is essentially non-random. Statistical hypothesis testing uses a null hypothesis, which posits randomness. Hence, when we look at linguistic phenomena in corpora, the null hypothesis will never be true. Moreover, where there is enough data, we shall (almost) always be able to establish that it is not true. In corpus studies, we frequently do have enough data, so the fact that a relation between two phenomena is demonstrably non-random, does not support the inference that it is not arbitrary. We present experimental evidence of how arbitrary associations between word frequencies and corpora are systematically non-random. We review literature in which hypothesis testing has been used, and show how it has often led to unhelpful or misleading results.”

And the take-home message (according to me):
Hypothesis testing has been used to reach conclusions where the difficulty in reaching a conclusion is caused by sparsity of data. But language data, in this age of information glut, is available in vast quantities. A better strategy will generally be to use more data. Then the difference between the motivated and the arbitrary will be evident without the use of compromised hypothesis testing. As Lord Rutherford put it: “If your experiment needs statistics, you ought to have done a better experiment.”
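For anyone who wants to see this large-N effect concretely, here is a minimal sketch (mine, not from Kilgarriff’s paper; the frequencies and corpus sizes are invented for illustration, and it assumes scipy is installed). A word occurring at a nearly identical rate in two corpora looks like noise at small N, but comes out “significant” once N is large enough:

```python
# A tiny, practically meaningless difference in a word's relative
# frequency (1.00 vs 1.02 occurrences per 1000 tokens), tested with a
# chi-square test at increasing corpus sizes.
from scipy.stats import chi2_contingency

def p_value(n_tokens):
    word_a = int(n_tokens * 0.00100)  # occurrences in corpus A
    word_b = int(n_tokens * 0.00102)  # occurrences in corpus B
    table = [[word_a, n_tokens - word_a],
             [word_b, n_tokens - word_b]]
    _, p, _, _ = chi2_contingency(table)
    return p

for n in (10_000, 1_000_000, 100_000_000):
    print(f"corpus size {n:>11,}: p = {p_value(n):.6f}")

# At 10,000 tokens the two counts round to the same number; at
# 100,000,000 tokens the same relative difference gives p far below
# 0.05 -- statistically "significant", but obviously not meaningful.
```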

Published on March 4, 2014 at 11:50 am

The Stronger – August Strindberg

“Persona” was the first Ingmar Bergman movie I watched, in mid-2008 or so. Since then, I have watched a few more of his movies, read some of his writings, and reached Strindberg through him over the past few years. “Persona” has remained the most intriguing movie, although it’s not my favorite Bergman movie. I don’t think I understand it, but it was the one that raised my curiosity about Bergman as a writer and set me on the path of watching his other movies. While listening to the lectures on Bergman in the Scandinavian Film and Television course on Coursera, I learnt that Strindberg’s one-act play “The Stronger” was an inspiration for “Persona”.

[The word "inspiration" is very different from "copy". Both the play and the movie are independent entities and are equally worth checking out. I personally would consider Persona to be a much more complex psychological drama and its much longer.]

Now, “The Stronger” did not particularly fascinate me. But it is hard not to think about the characters and their possible interpretations after reading the play. It’s short, very short, but it has its impact on the reader nevertheless. I will not say anything more, but will quote something that I read again and again in the play (no, not because I don’t understand English – but because the characters came alive in front of my eyes when I read the monologue).

“Everything, everything came from you to me, even your passions. Your soul crept into mine, like a worm into an apple, ate and ate, bored and bored, until nothing was left but the rind and a little black dust within. I wanted to get away from you, but I couldn’t; you lay like a snake and charmed me with your black eyes; I felt that when I lifted my wings they only dragged me down; I lay in the water with bound feet, and the stronger I strove to keep up the deeper I worked myself down, down, until I sank to the bottom, where you lay like a giant crab to clutch me in your claws–and there I am lying now.

I hate you, hate you, hate you! And you only sit there silent–silent and indifferent; indifferent whether it’s new moon or waning moon, Christmas or New Year’s, whether others are happy or unhappy; without power to hate or to love; as quiet as a stork by a rat hole–you couldn’t scent your prey and capture it, but you could lie in wait for it!”

Here is an interesting analysis of the play.

A few months back, I bought the screenplay of “Persona” and found a PDF of critical essays on the film. Perhaps it’s time to start reading them soon! :-)

Published on February 23, 2014 at 1:16 pm

Questions to Mother Nature

I wrote this in May 2013, wondering how long those cold days would last. Looks like it’s time to complain about not having snow this time! I wonder if there will ever be a time when I won’t complain! ;)

****

Mother nature, mother nature,
May I be so bold -
and call you cold
for making our May so cold?

I knew you had a heart of gold
You forgave us since time old
Now, is your temper losing its hold?
is that what is being told?

Or is this just your way to scold
your problem children, the man-fold?
I know, your fury was foretold
and perhaps, we should never be cajold

But, Mother nature, mother nature
it hurts, this cold
have some mercy rolled
and please, let some warmth be doled!

Published on February 2, 2014 at 1:10 pm

“Linguistically Naive != Language Independent” and my soliloquy

This post is about a paper that I read today (which inspired me to write a real blog post after months!)

The paper: Linguistically Naive != Language Independent: Why NLP Needs Linguistic Typology
Author: Emily Bender
Proceedings of the EACL 2009 Workshop on the Interaction between Linguistics and Computational Linguistics, pages 26–32. ACL.

In short, this is a position paper that argues that incorporating linguistic knowledge is a must if we want to create truly language-independent NLP systems. Now, on the surface, that looks like a contradictory statement. Well, it isn’t… and it is common sense, in… er… some sense ;)

So, time for some background: an NLP algorithm that offers a solution to some problem is called language independent if the approach can work for languages other than the one it was initially developed for. A common example is Google Translate, a practical demonstration of how one approach can work across multiple language pairs (with varying efficiency, of course, but that is a different matter). The point of these language-independent approaches is that, in theory, you can apply the algorithm to any language as long as you have the relevant data for that language. However, such approaches in contemporary research typically eliminate any linguistic knowledge from their modeling, and thereby make it “language” independent.

Now, what the paper argues for is clear from the title – “linguistically naive != language independent”.

I liked the point made in Section 2, that in some cases the surface appearance of language independence actually hides a language dependence. The specific example of n-grams – how efficiently they work, albeit only for languages with a certain kind of properties, while the claim of language independence is still made – nailed the point down. Over time, I became averse to the idea of using n-grams for each and every problem, as I felt they give no useful insights, either from a linguistic or from a computational perspective (this is my personal opinion). However, although I did think about this language-dependent aspect of n-grams, I never clearly put it this way, and I just accepted the “language independence” claim. Now, this paper changed that acceptance. :-)
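To make that hidden dependence concrete, here is a toy sketch of my own (not from the paper): a whitespace-based n-gram extractor looks language independent, but it silently assumes that words are separated by spaces – a property that languages written without spaces do not have.

```python
# "Language independent" bigrams -- as long as the language marks word
# boundaries with whitespace, which is the hidden assumption here.
def word_ngrams(text, n=2):
    tokens = text.split()  # implicit assumption: spaces delimit words
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(word_ngrams("the cat sat on the mat"))
# -> [('the', 'cat'), ('cat', 'sat'), ...]  looks language independent

print(word_ngrams("猫がマットの上に座った"))  # the same sentence in Japanese
# -> []  one giant "word" and no bigrams: the approach quietly fails
#        for a language that does not use spaces
```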

One good thing about this paper is that it does not stop there. It also discusses approaches that use language modeling but do slightly more than plain n-grams to accommodate various types of languages (factored language models), and it talks about how a “one size fits all” approach won’t work. There is this gem of a statement:

“A truly language independent system works equally well across languages. When a system that is meant to be language independent does not in fact work equally well across languages, it is likely because something about the system design is making implicit assumptions about language structure. These assumptions are typically the result of “overfitting” to the original development language(s).”
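To make the factored-language-model idea mentioned above a bit more concrete, here is a rough sketch of how I understand the core trick (my own simplification, with made-up data, not code from the paper): each token carries several factors – say surface form, lemma and POS – so that when an exact word sequence is unseen, the model can back off to coarser factors.

```python
# Each token is a bundle of factors: (surface, lemma, POS).
# The data below is invented purely for illustration.
from collections import Counter

corpus = [
    ("dogs", "dog", "NOUN"), ("bark", "bark", "VERB"),
    ("cats", "cat", "NOUN"), ("meow", "meow", "VERB"),
]

surface_bigrams, pos_bigrams = Counter(), Counter()
for (w1, _, p1), (w2, _, p2) in zip(corpus, corpus[1:]):
    surface_bigrams[(w1, w2)] += 1
    pos_bigrams[(p1, p2)] += 1

def score(prev, cur):
    # Use the exact word pair if we have seen it; otherwise back off
    # to the (discounted) part-of-speech pattern.
    if surface_bigrams[(prev[0], cur[0])]:
        return surface_bigrams[(prev[0], cur[0])]
    return 0.5 * pos_bigrams[(prev[2], cur[2])]

# ("dogs", "meow") was never seen, but the NOUN -> VERB factor still
# gives it some credit instead of a flat zero:
print(score(("dogs", "dog", "NOUN"), ("meow", "meow", "VERB")))
```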

Then there is a section on language independence claims and on the representation of languages from various families in the papers of ACL 2008, which concludes:
“Nonetheless, to the extent that language independence is an important goal, the field needs to improve both its testing of language independence and its sampling of languages to test against.”

Finally, the paper talks about one form of linguistic knowledge that can be incorporated into NLP systems – linguistic typology – and gives pointers to some useful resources and relevant research in this direction.

And I conclude this post with the two main points that I hope the research community takes note of:

(1) “This paper has briefly argued that the best way to create language-independent systems is to include linguistic knowledge, specifically knowledge about the ways in which languages vary in their structure. Only by doing so can we ensure that our systems are not overfitted to the development languages.”

(2) “Finally, if the field as a whole values language independence as a property of NLP systems, then we should ensure that the languages we select to use in evaluations are representative of both the language types and language families we are interested in.”

Good paper, and a considerable amount of food for thought! These are important design considerations, IMHO.

The extended epilogue:

At NAACL 2012, there was a tutorial titled “100 Things You Always Wanted to Know about Linguistics But Were Afraid to Ask”, by Emily Bender. Although I could in theory have attended the conference, I could not, as I had to go to India. But this was one tutorial that caught my attention with its name and description, and I really wanted to attend it.

Thanks to a colleague who attended, I managed to see the slides of the tutorial (which I later found on the professor’s website). Last week, during some random surfing, I realized that an elaborated version had been released as a book:

Linguistic Fundamentals for Natural Language Processing: 100 Essentials from Morphology and Syntax
by Emily Bender
Pub: Synthesis Lectures on Human Language Technologies, Morgan and Claypool Publishers

I happily borrowed the book through inter-library loan, and it travelled for a few days to reach me from somewhere in Lower Saxony, here in Baden-Württemberg. Just imagine, it travelled all that way just for my sake! ;) :P

So, I started to go through the book. Even in the days when I lacked any basic knowledge of this field, I always felt that natural language processing should involve some form of linguistic modeling by default. However, most of the successful, so-called “language independent” approaches (some of which also became products we use regularly, like Google Translate and Transliterate) never speak about such linguistic modeling (at least, not in most of what I read).

There is also the Norvig vs. Chomsky debate, of which I keep getting reminded when I think of this topic. (Neither of them is wrong in my view, but that is not the point here.)

In this context, I found the paper particularly worth sharing. While reading the introductory parts of Emily Bender’s book, I found a reference to the paper, and this blog post came out of that reading experience. Anyway, I perhaps should end the post here.

Published on January 23, 2014 at 5:04 pm

Antonius Block’s questions

On my nth revisit of “The Seventh Seal” film script, I was again rereading the same questions… visualizing the same scene in the movie. This is where Antonius Block asks the entity in the confession box (not knowing that it is Death) about God. Those haunting questions…
****

“Call it whatever you like. Is it so cruelly inconceivable to grasp God with the senses? Why should He hide himself in a mist of half-spoken promises and unseen miracles?

How can we have faith in those who believe when we can’t have faith in ourselves? What is going to happen to those of us who want to believe but aren’t able to? And what is to become of those who neither want nor are capable of believing?

Why can’t I kill the God within me? Why does He live on in this painful and humiliating way even though I curse Him and want to tear Him out of my heart? Why, in spite of everything, is He a baffling reality that I can’t shake off?”

******

Published on January 4, 2014 at 7:18 pm

The “Ikiru” revisit post

A few weeks back, Nagini Kandala posted on pustakam.net about Leo Tolstoy’s “The Death of Ivan Ilyich”. The story seemed very similar to Akira Kurosawa’s 1952 film “Ikiru”, and I came to know that the film was in fact inspired by the novella. Now, I still have not read Tolstoy’s novella, but my thoughts turned to Ikiru.

Thanks to the wonderful inter-library loan scheme here, a couple of days ago I got a Criterion Collection DVD of Ikiru, with a bonus documentary on Kurosawa and several other perks. I first watched the movie more than five years ago (here is a small article I wrote on it at Navatarangam.com), and so I wondered if it would seem any different to me now.

(FYI: I realized recently that my thoughts on what I liked about Rashomon have changed significantly since my first watch.)

For now, this small post is just some random notes on the movie and its accompanying commentaries on the DVD set.

Ikiru – movie

“Over the years I have seen Ikiru every five years or so, and each time it has moved me, and made me think. And the older I get, the less Watanabe seems like a pathetic old man, and the more he seems like every one of us.”
– Roger Ebert, the famous film critic, on this movie.

When I finished watching the movie, although I did not yet know Ebert’s words, I felt exactly the same way… that I found the old man Watanabe less irritating and closer to life.

Now, I think I can say that this is one of the best movies I have watched (okay, I have not yet watched most of the “must watch” movies from those lists of tens and hundreds of films).

To know more about Ikiru, visit its Wikipedia page.

As much as I want to write more here, for now I won’t. Maybe some other time.

Criterion Collection – Comments
Apart from the movie itself, the first DVD contained another version of the movie with commentary from the Criterion Collection folks. The commentary ran almost as long as the movie itself. I like the idea, I really enjoyed the commentary to a large extent, and I listened to it without skipping any part. (So I ended up watching the movie again!)

The comments on details I had failed to notice when watching the movie (e.g., on dressing styles, mannerisms, etc.), and the trivia shared, were certainly interesting. However, there were also moments where I felt it was overkill; I wondered if so much analysis and spoon-feeding is really necessary. Also, despite the apparent knowledge of the commentator and the depth of his analysis, I was eventually left with a feeling of “after all, all this commentary is just his interpretation of the movie”.

(Disclaimer: Okay, all you film critics and film students – don’t blast me. I would like to think freely, at least after some initial guidance. I don’t like this spoon-feeding kind of commentary, and that’s just a personal preference. I won’t respond to spiteful comments.)

Anyway, I do think the idea of adding a commentary version is great; it just needs to be used at one’s own discretion.

Documentary on Kurosawa’s movies:

The best part of the second DVD in this set is listening to Kurosawa speak about his movies. When I read his autobiography, the only thing that disappointed me was that he stopped the story just before the international release of “Rashomon”. Since all the movies of his that I had seen came after it, I was naturally curious to read his stories about those movies. This documentary filled that void, not only by having him talk about his various movies, but also by interspersing his comments with those of people who worked with him, and with video clips from the shoots.

For aspiring film-makers, these documentaries provide interesting and useful tips. For general film viewers, they are very interesting and informative. Who does not want a sneak peek into the film production life of their favourite director? This is an interview that can be revisited again and again. I would perhaps rent this DVD again after a few months or years.

There ends the story of how a dull early-autumn weekend was made colorful, thanks to this DVD! :-)

Published on October 14, 2013 at 6:22 am

Sadgati

(I found this short note in my drafts folder, written in May 2013)
******

A few days ago, I ended up watching “Sadgati”, a 1981 Hindi film by Satyajit Ray. I wonder if such a short-duration film should actually be called a short film, but that is not the point. Sadgati is based on a story by Munshi Premchand and is a (rather silent) commentary on the caste system. I say silent because it is more of a depiction/narration than a real commentary. No one tries to take stances. No one tries to preach to us. Yet, the intended message reaches us through the impact the narrative creates.

What I also liked was the fact that the movie ran for less than an hour. Although I think it could have been even shorter (says someone with zero knowledge of movie making), this seems an ideal time frame for turning a short story into a movie with a strong impact. My favourite Telugu directors would have made it spicier, with songs, fights and a 2.5-hour duration, but that is a different story anyway :-)

The lead actors, Om Puri and Mohan Agashe, were brilliant. Smita Patil had a rather small role, but I continue to be amazed by her mature portrayal of such roles despite her young age when she played them. In all, this is a short but strong movie which will “haunt” you, as one of the online reviews I read put it.

Published on September 16, 2013 at 1:54 pm

MLSS 2013 – Week 1 recap

I am attending this year’s Machine Learning Summer School, and we just finished one week of lectures. I thought now is the moment to look back and note down my thoughts (mainly because, thankfully, we don’t have lectures on Sundays!). One more week to go, and I am already very glad that I am here, listening to all these amazing people who are undoubtedly some of the best researchers in this area. There is also a very vibrant and smart student community.

Until Saturday evening, my thoughts on the summer school focused mostly on the content of the sessions: the mathematics in them, my comfort and discomfort with it, its relevance, understanding its conceptual basis, and so on. I won’t claim that I understood everything. Some talks I understood better, some not at all. I also understood that things could have been much better for me if someone had told us why we actually needed to seriously follow all those Engineering Mathematics courses during my bachelor’s ;).

However, coming to the point: as I listened to the Multilayer Nets lecture by Leon Bottou on Saturday afternoon, something struck me in particular. It looks like two things that I always thought of as possibly interesting aspects of machine learning are not really a concern of the actual machine learning community. (Okay, one summer school is not a whole community, but I did meet people who have been in this field of research for years now.)

1) What exactly are you giving as input for the machine to learn? Shouldn’t we give the machine proper input for it to learn what we expect it to learn?

2) Why isn’t the interpretability of a model an issue worth researching?

Let me elaborate on these.

Coming to the first one: this is called “feature engineering”. The answer I heard from one senior researcher to this question was: “We build algorithms that will enable the machine to learn from anything. Features are not our problem. The machine will figure that out.” But won’t the machine need the right ecosystem for that? If I grow up in a Telugu-speaking household and am exposed to Telugu input all the time, would I be expected to learn Telugu or Chinese? Likewise, if we want to build a model that does a specific task, is it not our responsibility to prepare the input for that? Okay, we can build systems that figure out the workable features by themselves. But won’t that let the machine learn anything from the several possible problem subspaces, instead of the specific thing we want it to learn? Yes, there are always ways to assess whether it is learning the right thing, but that’s not the point. In a way, this connects back to the second question.

I am not knowledgeable enough in this field to come up with a well-argued response to that comment by the senior researcher. It is also a matter of fact that there is enough evidence that the approach does work in some scenarios. But this is a general question about the applicability of the models, about issues of domain adaptation, if any, and so on. I found very little literature on the theoretical aspects connecting feature engineering to algorithm design – hence these basic doubts. (A toy sketch of what I mean by preparing the input follows below.)
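Here is that toy sketch – entirely hypothetical, just to make the contrast concrete: one view hands the learner raw input and hopes it discovers the relevant structure; the other prepares features encoding what we believe matters for the task.

```python
# Toy task: deciding whether a sentence is "difficult".
def raw_features(sentence):
    # "Features are not our problem": hand over the words as-is and
    # let the learner discover any relevant structure on its own.
    return sentence.split()

def engineered_features(sentence):
    # Prepare the input: properties we believe relate to difficulty.
    words = sentence.split()
    return {
        "num_words": len(words),
        "avg_word_length": sum(len(w) for w in words) / len(words),
        "num_long_words": sum(1 for w in words if len(w) > 7),
    }

s = "The perspicacious researcher questioned the unexamined assumptions."
print(raw_features(s))         # raw view
print(engineered_features(s))  # engineered view
```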

The second question is also something I have been thinking about for a long time now. Are people really not bothered about how those who apply machine learning in their fields interpret the models, or am I just bad at searching for the right things? Why is there no talk about the interpretability of models? I did find a small amount of literature on “human-comprehensible machine learning” and related research, but not much.

I am still in the process of thinking, reading and understanding more on this topic. I will perhaps write another, more detailed post soon (with whatever limited awareness I have of the topic). But in the meanwhile:

* Here is a blog post by a grad student that makes some valid points on the interpretability of models.

* “Machine Learning that Matters”, an ICML 2012 position paper by Kiri Wagstaff. This is something I keep coming back to whenever I start thinking about these topics. Not that the paper answers my questions… but it keeps me motivated to think about them.

* An older blog post on the above paper, which had some good discussion in the comments section.

With these thoughts, we march towards the second week of awesomeness at MLSS 2013 :-).

Published on September 1, 2013 at 3:31 pm

Notes from ACL

This is the kind of post that would probably interest no one except me. I was at ACL (in a very interesting city called Sofia, the capital of Bulgaria) last week, and I am still in the process of making notes on the papers that interested me, abstracts that raised my curiosity, short- and long-term interest topics, etc. I thought it’s probably a better idea to arrange at least the titles in some subgroups and save them somewhere, so that it would be easy for me to get back to them later. I did not read all of them completely. In fact, for a few of them, I did not even go beyond the abstract. So, don’t ask me questions. Anyone interested in any of these titles can either read them by googling for them, or visit the ACL Anthology page for ACL’13 and find the PDFs there.

The first two sections below are my current topics of interest. The third is a general topic of interest. The fourth includes everything else that piqued my interest. The fifth section is on teaching CL/NLP, which is also a long-term interest of mine. The final section lists whole workshops that I am interested in.

*****

Various dimensions of the notion of text difficulty, readability
* Automatically predicting sentence translation difficulty – Mishra and Bhattacharya
* Automatic detection of deception in child produced speech using syntactic complexity features – Yancheva and Rudzicz
* Simple, readable sub-sentences – Klerke and Søgaard
* Improving text simplification language modeling using Unsimplified text data – Kauchak
* Typesetting for improved readability using lexical and syntactic information – Salama et al.
* What makes writing great?: First experiments on Article quality prediction in the science journalism domain, Louis and Nenkova
* Word surprisal predicts N400 amplitude during reading – Frank et al.
* An analysis of memory-based processing costs using incremental deep syntactic dependency parsing – van Schijndel et al.

Language Learning, Assessment etc.
* Discriminative Approach to fill-in-the-blank quiz generation for language learners
* Modeling child divergences from Adult Grammar with Automatic Error Correction
* Automated collocation suggestion for Japanese second language learners
* Reconstructing an Indo-European family tree from non-native English texts
* Word association profiles and their use for automated scoring of essays -Klebanov and Flor.
* Grammatical error correction using Integer Linear programming
* A learner corpus based approach to verb suggestion for ESL
* Modeling thesis clarity in student essays – Persing & Ng
* Computerized analysis of a verbal fluency test – Szumlanski et al.
* Exploring word class n-grams to measure language development in children – Ramirez-de-la-Rosa et al.

NLP for other languages:
* Sorani Kurdish versus Kurmanji Kurdish: An Empirical Comparison – Esmaili and Salavati
* Identifying English and Hungarian light verb constructions: A contrastive approach – Vincze et al.
* Real-world semi-supervised learning of POS taggers for low-resource languages – Garrette et al.
* Learning to lemmatize Polish noun phrases – Radziszewski
* Sentence level dialect identification in Arabic – Elfardy and Diab

Others:
* Exploring Word Order Universals: a probabilistic graphical model approach – Xia Lu.
* An open-source toolkit for quantitative historical linguistics
* SORT: An improved source rewriting tool for improved translation
* Unsupervised consonant-vowel prediction over hundreds of languages
* Linguistic models for analyzing and detecting biased language.
* Earlier identification of epilepsy surgery candidates using natural language processing – Matykiewicz et al.
* Parallels between linguistics and biology. Chakraborti and Tendulkar
* Analysing lexical consistency in translation – Guillou
* Associative texture is lost in translation – Klebanov and Flor

Teaching CL, NLP:
* Artificial IntelliDance: Teaching Machine learning through choreography, Agarwal and Trainor
* Treebanking for data-driven research in the classroom – Lee et al.
* Learning computational linguistics through NLP evaluation events: the experience of the Russian evaluation initiative – Bonch-Osmolovskaya et al.
* Teaching the basics of NLP and ML in an introductory course to Information Science. Agarwal.

whole workshops and competitions:
* Shared task on quality estimation in Machine translation
* Predicting and improving textual readability for target reader populations (PITR 2013)

Published on August 14, 2013 at 9:09 am