In information theory and computer science, the Levenshtein distance is a string metric for measuring the difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits (i.e. insertions, deletions or substitutions) required to change one word into the other. The phrase edit distance is often used to refer specifically to Levenshtein distance. It is named after Vladimir Levenshtein, who considered this distance in 1965. It is closely related to pairwise string alignments.

I. Implementation in R

R package: ‘stringdist’

Description: Calculate various string distances based on edits (Damerau-Levenshtein, Hamming, Levenshtein, optimal string alignment), q-grams (q-gram, cosine, Jaccard distance) or heuristic metrics (Jaro, Jaro-Winkler).
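
The examples that follow assume the package is installed and loaded. A minimal setup sketch (the strings here are made up for illustration; the method codes mirror the list above):

    # one-time install from CRAN, then load the package
    # install.packages("stringdist")
    library(stringdist)

    # the same pair of strings under a few of the supported methods
    stringdist("levenshtein", "levenstein", method = "lv")   # Levenshtein
    stringdist("levenshtein", "levenstein", method = "dl")   # Damerau-Levenshtein
    stringdist("levenshtein", "levenstein", method = "jw")   # Jaro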

1. amatch Examples

    # which sci-fi heroes are stringdistantly nearest
    amatch("leia",c("uhura","leela"),maxDist=5)

    # we can restrict the search
    amatch("leia",c("uhura","leela"),maxDist=1)

    # setting nomatch returns a different value when no match is found
    amatch("leia",c("uhura","leela"),maxDist=1,nomatch=0)

    # this is always true if maxDist is Inf
    ain("leia",c("uhura","leela"),maxDist=Inf)

    # let's look in a neighbourhood of maximum 2 typos (by default, the OSA algorithm is used)
    ain("leia",c("uhura","leela"), maxDist=2)

2. qgrams Examples

    qgrams('hello world',q=3)

    # q-grams are counted uniquely over a character vector
    qgrams(rep('hello world',2),q=3)

    # to count them separately, do something like
    x <- c('hello', 'world')
    lapply(x,qgrams, q=3)

    # output rows may be named, and you can pass any number of character vectors
    x <- "I will not buy this record, it is scratched"
    y <- "My hovercraft is full of eels"
    z <- c("this", "is", "a", "dead","parrot")

    qgrams(A = x, B = y, C = z,q=2)

    # a tongue twister, showing the effects of useBytes and useNames
    x <- "peter piper picked a peck of pickled peppers"
    qgrams(x, q=2)
    qgrams(x, q=2, useNames=FALSE)

    qgrams(x, q=2, useBytes=TRUE)
    qgrams(x, q=2, useBytes=TRUE, useNames=TRUE)
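
The counts returned by qgrams are what the 'qgram' distance of stringdist (shown below) is built from: the distance is the sum of absolute differences between the two count profiles. A small sketch of that relationship, assuming the row-per-argument output illustrated above:

    a <- "abc"; b <- "cba"
    counts <- qgrams(a = a, b = b, q = 2)      # one row per named argument
    sum(abs(counts["a", ] - counts["b", ]))    # should agree with...
    stringdist(a, b, method = "qgram", q = 2)  # ...the 'qgram' distance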

3. stringdist Examples

    # Simple example using optimal string alignment
    stringdist("ca","abc")

    # The same example using Damerau-Levenshtein distance (multiple editing of substrings allowed)
    stringdist("ca","abc",method="dl")

    # string distance matching is case sensitive:
    stringdist("ABC","abc")

    # so you may want to normalize a bit:
    stringdist(tolower("ABC"),"abc")

    # stringdist recycles the shortest argument:
    stringdist(c('a','b','c'),c('a','c'))

    # stringdistmatrix gives the distance matrix (by default for optimal string alignment):
    stringdistmatrix(c('a','b','c'),c('a','c'))

    # different edit operations may be weighted; e.g. weighted transposition
    # (weights are given in the order deletion, insertion, substitution, transposition):
    stringdist('ab','ba',weight=c(1,1,1,0.5))

    # Non-unit weights for insertion and deletion make the distance metric asymmetric
    stringdist('ca','abc')
    stringdist('abc','ca')
    stringdist('ca','abc',weight=c(0.5,1,1,1))
    stringdist('abc','ca',weight=c(0.5,1,1,1))

    # Hamming distance is undefined for
    # strings of unequal lengths so stringdist returns Inf
    stringdist("ab","abc",method="h")
    # For strings of equal length it counts the number of unequal characters as they occur
    # in the strings from beginning to end
    stringdist("hello","HeLl0",method="h")

    # The lcs (longest common substring) distance returns the number of
    # characters that are not part of the lcs.
    #
    # Here, the lcs is either 'a' or 'b' and one character cannot be paired:
    stringdist('ab','ba',method="lcs")

    # Here the lcs is 'surey'; the 'v' of 'survey' and the 'g' and one 'r' of 'surgery' are not paired
    stringdist('survey','surgery',method="lcs")

    # q-grams are based on the difference between occurrences of q consecutive characters
    # in string a and string b.
    # Since each of the characters 'a', 'b' and 'c' occurs in both 'abc' and 'cba', the q=1 distance equals 0:
    stringdist('abc','cba',method='qgram',q=1)

    # since the first string consists of the q-grams 'ab','bc' and the second
    # of 'cb' and 'ba', the q=2 distance equals 4 (they have no q=2 grams in common):
    stringdist('abc','cba',method='qgram',q=2)

    # Wikipedia has the following example of the Jaro distance.
    stringdist('MARTHA','MATHRA',method='jw')
    # Note that stringdist gives a _distance_ where Wikipedia gives the corresponding
    # _similarity measure_. To get the Wikipedia result:
    1 - stringdist('MARTHA','MATHRA',method='jw')

    # The corresponding Jaro-Winkler distance can be computed by setting p=0.1
    stringdist('MARTHA','MATHRA',method='jw',p=0.1)
    # or, as a similarity measure
    1 - stringdist('MARTHA','MATHRA',method='jw',p=0.1)

II. Wikipedia details:

1. Definition

Mathematically, the Levenshtein distance between two strings a, b is given by \operatorname{lev}_{a,b}(|a|,|b|) where

\operatorname{lev}_{a,b}(i,j) =
\begin{cases}
  \max(i,j) & \text{if } \min(i,j) = 0, \\
  \min \begin{cases}
    \operatorname{lev}_{a,b}(i-1,j) + 1 \\
    \operatorname{lev}_{a,b}(i,j-1) + 1 \\
    \operatorname{lev}_{a,b}(i-1,j-1) + 1_{(a_i \neq b_j)}
  \end{cases} & \text{otherwise,}
\end{cases}

where 1_{(a_i \neq b_j)} is the indicator function equal to 0 when a_i = b_j and 1 otherwise.

Note that the first element in the minimum corresponds to deletion (from a to b), the second to insertion and the third to match or mismatch, depending on whether the respective symbols are the same.

Example

For example, the Levenshtein distance between “kitten” and “sitting” is 3, since the following three edits change one into the other, and there is no way to do it with fewer than three edits:

  1. kitten → sitten (substitution of “s” for “k”)
  2. sitten → sittin (substitution of “i” for “e”)
  3. sittin → sitting (insertion of “g” at the end).
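
As a quick cross-check with the R package from section I (the method code "lv" selects plain Levenshtein rather than the default optimal string alignment):

    stringdist("kitten", "sitting", method = "lv")   # 3, as in the worked example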

Upper and lower bounds

The Levenshtein distance has several simple upper and lower bounds. These include:

  • It is always at least the difference of the sizes of the two strings.
  • It is at most the length of the longer string.
  • It is zero if and only if the strings are equal.
  • If the strings are the same size, the Hamming distance is an upper bound on the Levenshtein distance.
  • The Levenshtein distance between two strings is no greater than the sum of their Levenshtein distances from a third string (triangle inequality).
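
These bounds are easy to spot-check numerically with stringdist; a small sketch using the strings from the examples above:

    # lower bound |7-6| = 1, upper bound 7; the distance 3 lies in between
    stringdist("kitten", "sitting", method = "lv")

    # for equal-length strings the Hamming distance is an upper bound
    stringdist("hello", "HeLl0", method = "lv")   # Levenshtein distance
    stringdist("hello", "HeLl0", method = "h")    # Hamming distance, at least as large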

2. Applications

In approximate string matching, the objective is to find matches for short strings in many longer texts, in situations where a small number of differences is to be expected. The short strings could come from a dictionary, for instance. Here, one of the strings is typically short, while the other is arbitrarily long. This has a wide range of applications, for instance, spell checkers, correction systems for optical character recognition, and software to assist natural language translation based on translation memory.

The Levenshtein distance can also be computed between two longer strings, but the cost to compute it, which is roughly proportional to the product of the two string lengths, makes this impractical. Thus, when used to aid in fuzzy string searching in applications such as record linkage, the compared strings are usually short to help improve speed of comparisons.

3. Relationship with other edit distance metrics

There are other popular measures of edit distance, which are calculated using a different set of allowable edit operations. For instance, the Damerau-Levenshtein distance additionally allows the transposition of two adjacent characters, the longest common subsequence (LCS) distance allows only insertion and deletion, the Hamming distance allows only substitution (and is therefore defined only for strings of equal length), and the Jaro distance allows only transposition.

Edit distance is usually defined as a parameterizable metric calculated with a specific set of allowed edit operations, and each operation is assigned a cost (possibly infinite). This is further generalized by DNA sequence alignment algorithms such as the Smith–Waterman algorithm, which make an operation’s cost depend on where it is applied.
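
The weight argument of stringdist used in section I is one such parameterization: for the default OSA method, each of the four operations can be given its own cost (in the order deletion, insertion, substitution, transposition, as I read the package documentation). A small sketch reusing the asymmetry example from above:

    # cheap deletions make the measure direction-dependent
    stringdist('ca', 'abc', method = "osa", weight = c(0.5, 1, 1, 1))
    stringdist('abc', 'ca', method = "osa", weight = c(0.5, 1, 1, 1))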

4. Computing Levenshtein distance

Recursive

A straightforward recursive implementation: the LevenshteinDistance function below takes two strings, s and t, and returns the Levenshtein distance between them:

// len_s and len_t are the number of characters in string s and t respectively
int LevenshteinDistance(string s, int len_s, string t, int len_t)
{
  int cost;

  /* base case: empty strings */
  if (len_s == 0) return len_t;
  if (len_t == 0) return len_s;

  /* test if last characters of the strings match */
  if (s[len_s-1] == t[len_t-1]) cost = 0;
  else                          cost = 1;

  /* return minimum of: delete char from s, delete char from t,
     and substitute/match the last characters of both */
  return minimum(LevenshteinDistance(s, len_s - 1, t, len_t    ) + 1,
                 LevenshteinDistance(s, len_s    , t, len_t - 1) + 1,
                 LevenshteinDistance(s, len_s - 1, t, len_t - 1) + cost);
}

Unfortunately, the straightforward recursive implementation is very inefficient because it recomputes the Levenshtein distance of the same substrings many times. A better method would never repeat the same distance calculation. For example, the Levenshtein distance of all possible prefixes might be stored in an array d[][] where d[i][j] is the distance between the first i characters of string s and the first j characters of string t. The table is easy to construct one row at a time starting with row 0. When the entire table has been built, the desired distance is d[len_s][len_t]. While this technique is significantly faster, it consumes memory on the order of len_s * len_t, far more than the straightforward recursive implementation.

Iterative with full matrix

Computing the Levenshtein distance is based on the observation that if we reserve a matrix to hold the Levenshtein distances between all prefixes of the first string and all prefixes of the second, then we can compute the values in the matrix in a dynamic programming fashion, and thus find the distance between the two full strings as the last value computed.

This algorithm, an example of bottom-up dynamic programming, is discussed, with variants, in the 1974 article The String-to-string correction problem by Robert A. Wagner and Michael J. Fischer.[2]

A straightforward implementation, as pseudocode for a function LevenshteinDistance that takes two strings, s of length m, and t of length n, and returns the Levenshtein distance between them:

int LevenshteinDistance(char s[1..m], char t[1..n])
{
  // for all i and j, d[i,j] will hold the Levenshtein distance between
  // the first i characters of s and the first j characters of t;
  // note that d has (m+1)*(n+1) values
  declare int d[0..m, 0..n]

  clear all elements in d // set each element to zero

  // source prefixes can be transformed into empty string by
  // dropping all characters
  for i from 1 to m
    {
      d[i, 0] := i
    }

  // target prefixes can be reached from empty source prefix
  // by inserting each character
  for j from 1 to n
    {
      d[0, j] := j
    }

  for j from 1 to n
    {
      for i from 1 to m
        {
          if s[i] = t[j] then
            d[i, j] := d[i-1, j-1]       // no operation required
          else
            d[i, j] := minimum
                    (
                      d[i-1, j] + 1,  // a deletion
                      d[i, j-1] + 1,  // an insertion
                      d[i-1, j-1] + 1 // a substitution
                    )
        }
    }

  return d[m, n]
}

Note that this implementation does not fit the definition precisely: it always prefers matches, even if insertions or deletions provided a better score. This is equivalent; it can be shown that for every optimal alignment (which induces the Levenshtein distance) there is another optimal alignment that prefers matches in the sense of this implementation.[3]
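
For readers following along in R, a direct and deliberately unoptimized transcription of the pseudocode might look as follows. This is a sketch for illustration only, not the algorithm the stringdist package uses internally:

levenshtein_full_matrix <- function(s, t) {
  s <- strsplit(s, "")[[1]]; t <- strsplit(t, "")[[1]]
  m <- length(s); n <- length(t)
  # d[i+1, j+1] holds the distance between the first i characters of s
  # and the first j characters of t (the +1 is because R matrices are 1-indexed)
  d <- matrix(0L, nrow = m + 1, ncol = n + 1)
  d[, 1] <- 0:m   # source prefixes: delete all characters
  d[1, ] <- 0:n   # target prefixes: insert all characters
  for (j in seq_len(n)) {
    for (i in seq_len(m)) {
      if (s[i] == t[j]) {
        d[i + 1, j + 1] <- d[i, j]                 # no operation required
      } else {
        d[i + 1, j + 1] <- min(d[i, j + 1] + 1,    # a deletion
                               d[i + 1, j] + 1,    # an insertion
                               d[i, j] + 1)        # a substitution
      }
    }
  }
  d[m + 1, n + 1]
}

levenshtein_full_matrix("kitten", "sitting")   # 3, the bottom-right entry of the first matrix below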

Two examples of the resulting matrix (the bottom-right entry is the distance):

          k  i  t  t  e  n
       0  1  2  3  4  5  6
    s  1  1  2  3  4  5  6
    i  2  2  1  2  3  4  5
    t  3  3  2  1  2  3  4
    t  4  4  3  2  1  2  3
    i  5  5  4  3  2  2  3
    n  6  6  5  4  3  3  2
    g  7  7  6  5  4  4  3

          S  a  t  u  r  d  a  y
       0  1  2  3  4  5  6  7  8
    S  1  0  1  2  3  4  5  6  7
    u  2  1  1  2  2  3  4  5  6
    n  3  2  2  2  3  3  4  5  6
    d  4  3  3  3  3  4  3  4  5
    a  5  4  3  4  4  4  4  3  4
    y  6  5  4  4  5  5  5  4  3

The invariant maintained throughout the algorithm is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i,j] operations. At the end, the bottom-right element of the array contains the answer.

Proof of correctness

As mentioned earlier, the invariant is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i,j] operations. This invariant holds since:

  • It is initially true on row and column 0 because s[1..i] can be transformed into the empty string t[1..0] by simply dropping all i characters. Similarly, we can transform s[1..0] to t[1..j] by simply adding all j characters.
  • If s[i] = t[j], and we can transform s[1..i-1] to t[1..j-1] in k operations, then we can do the same to s[1..i] and just leave the last character alone, giving k operations.
  • Otherwise, the distance is the minimum of the three possible ways to do the transformation:
    • If we can transform s[1..i] to t[1..j-1] in k operations, then we can simply add t[j] afterwards to get t[1..j] in k+1 operations (insertion).
    • If we can transform s[1..i-1] to t[1..j] in k operations, then we can remove s[i] and then do the same transformation, for a total of k+1 operations (deletion).
    • If we can transform s[1..i-1] to t[1..j-1] in k operations, then we can do the same to s[1..i], and exchange the original s[i] for t[j] afterwards, for a total of k+1 operations (substitution).
  • The number of operations required to transform s[1..m] into t[1..n] is of course the number required to transform all of s into all of t, and so d[m,n] holds our result.

This proof fails to validate that the number placed in d[i,j] is in fact minimal; this is more difficult to show, and involves an argument by contradiction in which we assume d[i,j] is smaller than the minimum of the three, and use this to show one of the three is not minimal.

Possible modifications

Possible modifications to this algorithm include:

  • We can adapt the algorithm to use less space, O(min(n,m)) instead of O(mn), since it only requires that the previous row and current row be stored at any one time.
  • We can store the number of insertions, deletions, and substitutions separately, or even the positions at which they occur, which is always j.
  • We can normalize the distance to the interval [0,1].
  • If we are only interested in the distance if it is smaller than a threshold k, then it suffices to compute a diagonal stripe of width 2k+1 in the matrix. In this way, the algorithm can be run in O(kl) time, where l is the length of the shortest string.[4]
  • We can give different penalty costs to insertion, deletion and substitution. We can also give penalty costs that depend on which characters are inserted, deleted or substituted.
  • By initializing the first row of the matrix with 0, the algorithm can be used for fuzzy string search of a string in a text.[5] This modification gives the end-position of matching substrings of the text. To determine the start-position of the matching substrings, the number of insertions and deletions can be stored separately and used to compute the start-position from the end-position.[6]
  • This algorithm parallelizes poorly, due to a large number of data dependencies. However, all the cost values can be computed in parallel, and the algorithm can be adapted to perform the minimum function in phases to eliminate dependencies.
  • By examining diagonals instead of rows, and by using lazy evaluation, we can find the Levenshtein distance in O(m (1 + d)) time (where d is the Levenshtein distance), which is much faster than the regular dynamic programming algorithm if the distance is small.[7]

Iterative with two matrix rows

It turns out that only two rows of the table are needed for the construction: the previous row and the current row (the one being calculated).

The Levenshtein distance may be calculated iteratively using the following algorithm:[8]

int LevenshteinDistance(string s, string t)
{
    // degenerate cases
    if (s == t) return 0;
    if (s.Length == 0) return t.Length;
    if (t.Length == 0) return s.Length;

    // create two work vectors of integer distances
    int[] v0 = new int[t.Length + 1];
    int[] v1 = new int[t.Length + 1];

    // initialize v0 (the previous row of distances)
    // this row is A[0][i]: edit distance for an empty s
    // the distance is just the number of characters to delete from t
    for (int i = 0; i < v0.Length; i++)
        v0[i] = i;

    for (int i = 0; i < s.Length; i++)
    {
        // calculate v1 (current row distances) from the previous row v0

        // first element of v1 is A[i+1][0]
        //   edit distance is delete (i+1) chars from s to match empty t
        v1[0] = i + 1;

        // use formula to fill in the rest of the row
        for (int j = 0; j < t.Length; j++)
        {
            var cost = (s[i] == t[j]) ? 0 : 1;
            // Math.Min takes two arguments, so the three-way minimum is nested
            v1[j + 1] = Math.Min(Math.Min(v1[j] + 1, v0[j + 1] + 1), v0[j] + cost);
        }

        // copy v1 (current row) to v0 (previous row) for next iteration
        for (int j = 0; j < v0.Length; j++)
            v0[j] = v1[j];
    }

    return v1[t.Length];
}
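
The same two-row idea carries over to R directly; again, this is a sketch for illustration rather than a reference implementation:

levenshtein_two_rows <- function(s, t) {
  s <- strsplit(s, "")[[1]]; t <- strsplit(t, "")[[1]]
  v0 <- 0:length(t)                      # previous row: distances for an empty s
  for (i in seq_along(s)) {
    v1 <- c(i, integer(length(t)))       # current row; first entry is i deletions
    for (j in seq_along(t)) {
      cost <- if (s[i] == t[j]) 0L else 1L
      v1[j + 1] <- min(v1[j] + 1L,       # an insertion
                       v0[j + 1] + 1L,   # a deletion
                       v0[j] + cost)     # a substitution (or match)
    }
    v0 <- v1                             # current row becomes previous row
  }
  v0[length(t) + 1]
}

levenshtein_two_rows("Sunday", "Saturday")   # 3, as in the second matrix above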