
Monday, July 25, 2022

Detecting and Interpreting Irony

 I remember some very early work where we considered the detection of irony in consumer reactions, also known as 'opinion mining', which often involves irony and is often linked to humor.

Irony machine: Why are AI researchers teaching computers to recognize irony?

by Charles Barbour, The Conversation, in TechXplore

What was your first reaction when you heard about Blake Lemoine, the Google engineer who announced last month that the AI program he was working on had developed consciousness?

If, like me, you're instinctively suspicious, it might have been something like: Is this guy serious? Does he honestly believe what he is saying? Or is this an elaborate hoax? Put the answers to those questions to one side. Focus instead on the questions themselves. Is it not true that even to ask them is to presuppose something crucial about Blake Lemoine: specifically, that he is conscious?

In other words, we can all imagine Blake Lemoine being deceptive.  And we can do so because we assume there is a difference between his inward convictions—what he genuinely believes—and his outward expressions: what he claims to believe.

Isn't that difference the mark of consciousness? Would we ever assume the same about a computer?

Consciousness: 'The hard problem'

It is not for nothing philosophers have taken to calling consciousness "the hard problem." It is notoriously difficult to define.

But for the moment, let's say a conscious being is one capable of having a thought and not divulging it.

This means consciousness would be the prerequisite for irony, or saying one thing while meaning the opposite. I know you are being ironic when I realize your words don't correspond with your thoughts.

That most of us have this capacity—and most of us routinely convey our unspoken meanings in this manner—is something that, I think, should surprise us more often than it does.

It seems almost distinctly human. Animals can certainly be funny—but not deliberately so. What about machines? Can they deceive? Can they keep secrets? Can they be ironic?

AI and irony

It is a truth universally acknowledged (among academics at least) that any research question you might cook up with the letters "AI" in it is already being studied somewhere by an army of obscenely well-resourced computational scientists—often, if not always, funded by the U.S. military.

This is certainly the case with the question of AI and irony, which has recently attracted a significant amount of research interest.

Of course, given that irony involves saying one thing while meaning the opposite, creating a machine that can detect it, let alone generate it, is no simple task.

But if we could create such a machine, it would have a multitude of practical applications, some more sinister than others. In the age of online reviews, for example, retailers have become very keen on so-called "opinion mining" and "sentiment analysis," which use AI to map not merely the content, but also the mood, of reviewers' comments.

Knowing whether your product is being praised or becoming the butt of the joke is valuable information.   Or consider content moderation on social media. If we want to limit online abuse while protecting freedom of speech, would it not be helpful to know when someone is serious and when they are joking?

Or what if someone tweets that they have just joined their local terrorist cell or they're packing a bomb in their suitcase and heading for the airport? (Don't ever tweet that, by the way.) Imagine if we could determine instantly whether they are serious, or whether they are just "being ironic."  ...
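The "opinion mining" and irony detection described above is usually framed as supervised text classification: feed in labeled examples of ironic and literal comments, and learn a model that scores new text. The sketch below is a toy illustration of that framing only, not the method used by any of the researchers mentioned; it assumes scikit-learn is available, and the handful of labeled reviews and the model choices are invented purely for illustration.

# A minimal sketch of irony-aware opinion mining as text classification.
# The tiny labeled dataset below is hypothetical; a real system would
# train on thousands of annotated reviews or tweets.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = ironic/sarcastic, 0 = literal.
reviews = [
    "Great, the battery died after two hours. Exactly what I hoped for.",
    "Oh wonderful, another update that deletes my settings.",
    "Fantastic product, works even better than advertised.",
    "Arrived on time and does exactly what it says.",
    "Love how it broke on the first day. Money well spent.",
    "Solid build quality and easy to set up.",
]
labels = [1, 1, 0, 0, 1, 0]

# Character n-grams pick up surface cues like "Oh wonderful" or
# "Exactly what I hoped for" without hand-written rules.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(reviews, labels)

test = "Amazing. It stopped working before I finished unboxing it."
prob_ironic = model.predict_proba([test])[0][1]
print(f"P(ironic) = {prob_ironic:.2f}")

A production system would, of course, use large annotated corpora and typically a pretrained language model rather than character n-grams, but the basic setup is the same: text in, probability that the surface sentiment is not the intended one out.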
