Posts tagged with ‘opensource’

How to remove the signature from e-mails with NLP?

kinow @ Jun 14, 2017 13:59:33

Some time ago I stumbled across EmailParser, a Python utility to remove e-mail signatures. Here’s a sample input e-mail from the project documentation.

Wendy – thanks for the intro! Moving you to bcc.

Hi Vincent – nice to meet you over email. Apologize for the late reply, I was on PTO for a couple weeks and this is my first week back in office. As Wendy mentioned, I am leading an AR/VR taskforce at Foobar Retail Solutions. The goal of the taskforce is to better understand how AR/VR can apply to retail/commerce and if/what is the role of a shopping center in AR/VR applications for retail.

Wendy mentioned that you would be a great person to speak to since you are close to what is going on in this space. Would love to set up some time to chat via phone next week. What does your availability look like on Monday or Wednesday?

Best,
Joe Smith

Joe Smith | Strategy & Business Development
111 Market St. Suite 111| San Francisco, CA 94103
M: 111.111.1111| joe@foobar.com

And here’s what it looks like afterwards.

Wendy – thanks for the intro! Moving you to bcc.

Hi Vincent – nice to meet you over email. Apologize for the late reply, I was on PTO for a couple weeks and this is my first week back in office. As Wendy mentioned, I am leading an AR/VR taskforce at Foobar Retail Solutions. The goal of the taskforce is to better understand how AR/VR can apply to retail/commerce and if/what is the role of a shopping center in AR/VR applications for retail.

Wendy mentioned that you would be a great person to speak to since you are close to what is going on in this space. Would love to set up some time to chat via phone next week. What does your availability look like on Monday or Wednesday?

As you can see, it removed all the lines after the main part of the message (i.e. after the three paragraphs). Here’s what the Python code looks like.

>>> from Parser import read_email, strip, prob_block, generate_text
>>> from spacy.en import English

>>> pos_tagger = English()  # part-of-speech tagger
>>> msg_raw = read_email('emails/test1.txt')
>>> msg_stripped = strip(msg_raw)  # preprocess the text before POS tagging

# iterate through the message blocks, writing non-signature blocks to the output file
>>> generate_text(msg_stripped, .9, pos_tagger, 'emails/test1_clean.txt')

What got me interested in this utility was its use of NLP; I could not have imagined how someone would use NLP for that. I also liked the simplicity of the approach, which is not perfect, but can still be useful.

After the imports, the code creates a part-of-speech tagger using the spaCy NLP library, reads the e-mail from a file, strips it, and creates an array with each paragraph of the message.

The magic happens in the generate_text function, which receives the array of paragraphs, a threshold, the POS tagger, and the output destination. Here’s what the function does.

for each message block
    if probability( signature block | message block ) < threshold
        write block to output file

And the formula for calculating the probability is quite simple too.

1. For a given paragraph (message block), find all the sentences in it.
2. Then count how many of the words (tokens) in those sentences are not verbs.
3. Return the proportion of non-verbs per sentence, i.e. the number of non-verbs divided by the number of sentences.

In summary, blocks that do not contain enough verbs to be considered part of the message are treated as signature blocks and discarded.

I had never thought about using an approach like this. It can definitely be helpful when doing data analysis, information retrieval, or scraping data from the web. Not necessarily with e-mails and signatures, but you get the gist of it.

♥ Open Source

Backward compatibility and switch statement with constant expressions

kinow @ Jun 10, 2017 17:35:39

Maintaining Open Source software can be challenging. Making sure you keep backward compatibility (not only binary) can be even more challenging. The Apache Commons Lang 3.6 release is happening right now thanks to Benedikt Ritter, and it is on its fourth release candidate (RC4).

The previous RC2 was cancelled due to an IBM JDK 8 compatibility issue; more specifically, the lazy initialization of ArrayList seems to differ between the Oracle JDK and the IBM JDK.

RC3 was cancelled due to a change that could affect users relying on a switch statement.

The change was in the CharEncoding class: some of its constants stopped being constant expressions.

“A constant expression is an expression denoting a value of primitive type or a String that does not complete abruptly and is composed using only the following (…)” (Java Language Specification, §15.28)

public class CharEncoding {
    // ...
    public static final String ISO_8859_1 = "ISO-8859-1";
    // ...
}

The code above contains a constant (static final) variable that is initialized with a constant expression, so users can safely use it in switch statements and the Java compiler won’t complain about it.

public class CharEncoding {
    // ...
    public static final String ISO_8859_1 = StandardCharsets.ISO_8859_1.name();
    // ...
}

The code above is from the change that caused the regression. Any user that was using the ISO_8859_1 constant in a switch statement would get a compilation error (e.g. “case expressions must be constant expressions”) when updating to Apache Commons Lang 3.6. That is because the field is no longer a constant expression.
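
To make this concrete, here is a minimal, hypothetical example of the kind of user code that breaks; the class and method names are made up for illustration.

import org.apache.commons.lang3.CharEncoding;

public class EncodingSwitchExample {

    // Compiles against Commons Lang 3.5, where ISO_8859_1 is initialized with
    // the string literal "ISO-8859-1" and is therefore a constant expression.
    // Against a version where the field is initialized via
    // StandardCharsets.ISO_8859_1.name(), this case label is rejected by the
    // compiler (e.g. "case expressions must be constant expressions").
    static String describe(String encoding) {
        switch (encoding) {
            case CharEncoding.ISO_8859_1:
                return "Latin-1";
            default:
                return "something else";
        }
    }

    public static void main(String[] args) {
        System.out.println(describe("ISO-8859-1")); // prints: Latin-1
    }
}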

I think I learned that some time ago, but if you had asked me what was wrong with the change, and whether it would break backward compatibility, I would probably have failed to spot the issue. There are tools for that now (e.g. Clirr, japicmp), though they may miss some cases too.

Luckily, in this case a user subscribed to the Apache Commons development mailing list spotted the issue and quickly reported it. Maintaining an Open Source project gets easier with constructive feedback from users like this one.

♥ Open Source

Apache Commons Text LookupTranslator

kinow @ Jun 02, 2017 22:50:39

Apache Commons Text includes several algorithms for text processing. Today’s post is about one of the classes available since the 1.0 release, the LookupTranslator.

It is used to translate text using a lookup table. Most users won’t necessarily be - knowingly - using this class. Most likely, they will use the StringEscapeUtils, which contains methods to escape and unescape CSV, JSON, XML, Java, and EcmaScript.

String original = "He didn't say, \"stop!\"";
String expected = "He didn't say, \\\"stop!\\\"";
String result   = StringEscapeUtils.escapeJava(original);

StringEscapeUtils uses CharSequenceTranslator implementations, including LookupTranslator. You can also use it directly, to escape other data your text may contain: special characters not supported by some third-party library or system, or even simpler cases.

In other words, you would be creating your own StringEscapeUtils. Let’s say you have some text where numbers must never start with the digit zero, due to some restriction in the way you use that data later.

// lookup table: replace the digit zero with an empty string
final LookupTranslator escapeNumber0 = new LookupTranslator(new String[][] { {"0", ""} });
String escaped = escapeNumber0.translate("There are 02 texts waiting for analysis today...");

That way the resulting text would be “There are 2 texts waiting for analysis today”, allowing you to proceed with the rest of your analysis. This is a very simple example, but hopefully you grokked how LookupTranslator works.
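
Going one step further, translators can be combined, which is essentially how StringEscapeUtils builds its own escapers. Below is a minimal sketch, assuming Commons Text 1.0; the characters being escaped are arbitrary, chosen just for illustration.

import org.apache.commons.text.translate.AggregateTranslator;
import org.apache.commons.text.translate.CharSequenceTranslator;
import org.apache.commons.text.translate.LookupTranslator;

public class CustomEscaper {

    // A tiny escaper for a hypothetical downstream system that cannot handle
    // pipes or semicolons: the aggregate applies both lookups to the input.
    private static final CharSequenceTranslator ESCAPE_CUSTOM = new AggregateTranslator(
            new LookupTranslator(new String[][] { { "|", "\\|" } }),
            new LookupTranslator(new String[][] { { ";", "\\;" } }));

    public static void main(String[] args) {
        // prints: field1\|field2\;field3
        System.out.println(ESCAPE_CUSTOM.translate("field1|field2;field3"));
    }
}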

♥ Open Source

Some links related to Apache Commons Text

kinow @ May 28, 2017 19:50:39

Apache Commons Text is one of the newest components in Apache Commons. It “is a library focused on algorithms working on strings”. I recently collected, in a bookmark folder, some links that are in one way or another related to the project. In case you are interested, check them out below.

  • Morgan Wahl, “Text is More Complicated Than You Think: Comparing and Sorting Unicode”, PyCon 2017
    • Q: test [text] to check whether our methods are OK with some of the examples in this talk
    • Q: Canonical Decomposition, and code points comparisons; are we doing it? Are we doing it right?
    • Q: Do we have casefolding?
    • Q: Do we have multi-level sort?
    • Q: CLDR
  • Łukasz Langa, “Unicode: What Is the Big Deal?”, PyCon 2017
    • Q: Quite sure we have an issue about guessing the encoding of a text… Is there a GPL library for that? Under Mozilla, perhaps?
  • Jiaqi Liu, “Fuzzy Search Algorithms: How and When to Use Them”, PyCon 2017
    • Q: Does OpenNLP have n-grams? Would it make sense to have that in [text]?
    • Q: Where can we find some tokenizers? OpenNLP?
  • Lothaire’s books, like “Combinatorics on Words” and “Algebraic Combinatorics on Words”.
  • Java tutorial lesson “Working with Text”
  • Mitzi Morris’ Text Processing in Java book
  • StringSearch Java library.
    • “The Java language lacks fast string searching algorithms. StringSearch provides implementations of the Boyer-Moore and the Shift-Or (bit-parallel) algorithms. These algorithms are easily five to ten times faster than the naïve implementation found in java.lang.String”.
  • Jakarta Oro (attic)
    • The Jakarta-ORO Java classes are a set of text-processing Java classes that provide Perl5 compatible regular expressions, AWK-like regular expressions, glob expressions, and utility classes for performing substitutions, splits, filtering filenames, etc. This library is the successor to the OROMatcher, AwkTools, PerlTools, and TextTools libraries originally from ORO, Inc. Despite little activity in the form of new development initiatives, issue reports, questions, and suggestions are responded to quickly.
    • Discontinued, but is there anything useful in there? The Attic always has interesting things, after all…
  • TextProcessing blog - A Text Processing Portal for Humans
  • twitter-text, the Twitter Java (multi-language actually…) text processing library.
  • Python’s text modules

♥ Open Source

When you don’t realize you need a Comparable

kinow @ May 15, 2017 23:07:39

In 2012, I wrote about how you always learn something new by following the Apache dev mailing lists.

After about five years, I am still learning, and still impressed by the knowledge of other developers. A few days ago I was massaging some code in a pull request and a developer suggested that I simplify it.

The suggestion was to make a class implement Comparable, both to simplify the code and to improve the design. I immediately agreed; in hindsight, it was the most logical choice. Yet I simply had not thought of it.

// What the code was
case VSPACE_SORTKEY :
    int cmp = 0;
    String c1 = nv1.getCollation();
    String c2 = nv2.getCollation();
    if (c1 != null && c2 != null && c1.equals(c2)) {
        // locales are parsed. Here we could think about caching if necessary
        Locale desiredLocale = Locale.forLanguageTag(c1);
        Collator collator = Collator.getInstance(desiredLocale);
        cmp = collator.compare(nv1.getString(), nv2.getString());
    } else {
        cmp = XSDFuncOp.compareString(nv1, nv2) ;
    }
    return cmp;
}
// What the code is now
case VSPACE_SORTKEY :
    return ((NodeValueSortKey) nv1).compareTo((NodeValueSortKey) nv2);
}

This moved the logic to a method in the NodeValueSortKey class, reducing the complexity of the class with the switch statement and making it easier to write unit tests.
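
For the curious, here is a rough sketch of what moving such comparison logic behind Comparable can look like. This is a simplified, hypothetical class, not Jena’s actual NodeValueSortKey implementation.

import java.text.Collator;
import java.util.Locale;
import java.util.Objects;

// Hypothetical sort key: a string value plus an optional collation
// (a BCP 47 language tag such as "de" or "pt-BR").
public class SortKey implements Comparable<SortKey> {

    private final String value;
    private final String collation;

    public SortKey(String value, String collation) {
        this.value = value;
        this.collation = collation;
    }

    @Override
    public int compareTo(SortKey other) {
        // When both keys share the same collation, compare with a Collator
        // for that locale; otherwise fall back to a plain string comparison.
        if (collation != null && Objects.equals(collation, other.collation)) {
            Locale locale = Locale.forLanguageTag(collation);
            Collator collator = Collator.getInstance(locale);
            return collator.compare(value, other.value);
        }
        return value.compareTo(other.value);
    }
}

With the comparison encapsulated this way, the caller’s switch statement collapses to a single compareTo call, as in the snippet above.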

If you are not involved in Open Source projects yet, my suggestion from five years ago stands: find a project related to something you like, start reading the code, lurk in the mailing lists, or watch the GitHub repositories.

You can always learn more!

♥ Open Source