
Automated Authorship Attribution Using Advanced Signal Classification Techniques

  • Maryam Ebrahimpour,

    Affiliation School of Electrical and Electronic Engineering, The University of Adelaide, Adelaide, South Australia, Australia

  • Tālis J. Putniņš,

    Affiliations School of Electrical and Electronic Engineering, The University of Adelaide, Adelaide, South Australia, Australia, Stockholm School of Economics in Riga, Riga, Latvia, University of Technology Sydney, Sydney, New South Wales, Australia

  • Matthew J. Berryman,

    Affiliations School of Electrical and Electronic Engineering, The University of Adelaide, Adelaide, South Australia, Australia, SMART Infrastructure Facility, University of Wollongong, Wollongong, New South Wales, Australia

  • Andrew Allison,

    Affiliation School of Electrical and Electronic Engineering, The University of Adelaide, Adelaide, South Australia, Australia

  • Brian W.-H. Ng,

    Affiliation School of Electrical and Electronic Engineering, The University of Adelaide, Adelaide, South Australia, Australia

  • Derek Abbott

    derek.abbott@adelaide.edu.au

    Affiliation School of Electrical and Electronic Engineering, The University of Adelaide, Adelaide, South Australia, Australia

Abstract

In this paper, we develop two automated authorship attribution schemes, one based on Multiple Discriminant Analysis (MDA) and the other based on a Support Vector Machine (SVM). The classification features we exploit are based on word frequencies in the text. We preprocess each text by stripping it of all characters except a-z and space, in order to increase the portability of the software to different types of texts. We test the methodology on a corpus of undisputed English texts, and use leave-one-out cross-validation to demonstrate classification accuracies in excess of 90%. We further test our methods on the Federalist Papers, whose authorship is partly disputed but enjoys a fair degree of scholarly consensus. Finally, we apply our methodology to the question of the authorship of the Letter to the Hebrews by comparing it against a number of original Greek texts of known authorship. These tests identify where some of the limitations lie, motivating a number of open questions for future work. An open source implementation of our methodology is freely available for use at https://github.com/matthewberryman/author-detection.

Introduction

The field of data mining is concerned with the extraction of previously unknown information and patterns from large-scale data sets using statistics, machine learning, and artificial intelligence. Its applications range from database searches to DNA analysis and text classification [1], [2].

Author attribution is the problem of identifying the authorship of given texts based on characteristics of which the authors themselves are not consciously aware. These characteristics are considered reliable because they are inaccessible to conscious manipulation and remain consistent, provided the author has not acquired a mental disorder, such as Alzheimer's disease, which is known to affect style [1], [3]. Author attribution also rests on the assumption that each author has a writing style that acts as a fingerprint; this is plausible because various measurable features of written text have been shown to remain unchanged across a given author's range of writing genres over time [4]–[6]. In 1851, the mathematician Augustus de Morgan tried to determine the authorship of the Letter to the Hebrews, in the New Testament, by measuring word lengths. Since de Morgan's seminal work, many other methods have been developed [7]–[9]. In 1964, the first computer-assisted studies – as opposed to manual methods – were performed by Mosteller and Wallace to investigate the authorship of the Federalist Papers [10]. Today, rapid advances in machine learning, statistical, and software methods have led to computer-based automated systems for the detection of authorship [11].

A key problem is to find features in written text that can be quantified so as to reflect an author's style. Once this is achieved, statistical or machine learning techniques can be used to analyse the similarity between pieces of text. The fast-growing areas of machine learning and statistical methods assist in processing voluminous data, where traditional methods fail due to sparsity and noise [12], [13].

In recent years, due to the increasing amount of data in various forms, including emails, blogs, internet messages, and SMS, the problem of author attribution has received more attention. In addition to its traditional application of shedding light on the authorship of disputed texts in the classical literature, new applications have arisen, such as plagiarism detection, web searching, spam email detection, and identifying the authors of disputed or anonymous documents in forensic investigations of cyber crime [14], [15]. Our focus here is the classical literature; future work may extend our methods to contemporary applications.

This paper is organized as follows. In the Methods section, we discuss the discriminant features that are utilized. Our classification approach compares the use of Multiple Discriminant Analysis (MDA) with Support Vector Machines (SVM) [16], and these methods are briefly introduced. The effectiveness of our methods is investigated by applying them to a benchmark composed of an English corpus of known authorship. Next, we apply our methods to the disputed texts of the Federalist Papers, as this question has been extensively studied before. Finally, we revisit de Morgan's problem by applying our methods to the question of the authorship of the Letter to the Hebrews in the New Testament.

Methods

Generally, there are three types of style markers for authorship attribution: lexical, syntactic, and structural features. Lexical features include, for example, the frequencies of words and letters. Syntactic features include punctuation and grammatically distinct classes of words, such as articles and prepositions. Structural features use the overall organization of the whole text, such as the length or number of sentences and paragraphs. Since lexical features are easy to extract and the result is usually unambiguous, they play the most important role in computational stylometry [17]–[19].

A number of methods in the literature utilize several features and attempt to find the best subset via a feature selection algorithm, leading to accuracies of up to 99%. However, this feature selection procedure may be corpus-dependent, thereby limiting its applicability for general use [11].

The stylometry marker used in this study is a lexical feature: the frequency of key words. This is one of the best features for discriminating between different authors [11], [20]. It is based on the occurrence of a series of non-contextual words such as articles and pronouns – for example, ‘the’, ‘and’, and ‘of’ in English. This category of words has little or no dependence on the topic or genre of the texts, and the technique can easily be applied to different languages – thus these words serve as useful classification features for determining the authorship of a wide range of texts. A tool is needed to break the texts into tokens, count them, and choose the most frequently occurring ones [21].

For a given authorship attribution problem, there is usually a group of candidate authors with an associated set of texts of known authorship, and a set of disputed texts requiring classification. Therefore, the data are divided into a training dataset and a disputed dataset. To find the set of function words, first, by means of a C++ software program, the number of occurrences of every word in the total dataset (i.e. training dataset plus disputed dataset) is counted. Next, these words are ranked from the most common to the least common, and the first $N$ words are chosen, where $N$ is a parameter of the classification algorithm. We shall call this set of words function words. Then the number of occurrences of each function word in each text is counted. For each text, the feature extraction algorithm outputs a vector containing the frequency of occurrence of each function word. This vector is normalized by dividing it by the total word count of the corresponding text, in order to remove the influence of differing overall text sizes. The normalized vector is fed into the classifier as the input.
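As a concrete sketch of this pipeline (the study's implementation is a C++ program; the Python below and its helper names are purely illustrative):

```python
import re
from collections import Counter

def tokenize(text):
    """Strip every character except a-z and space, then split into words."""
    return re.sub(r'[^a-z ]+', ' ', text.lower()).split()

def function_word_vectors(texts, n_function_words):
    """texts: raw strings of the total dataset (training plus disputed).
    Returns the top-N function words and one normalized vector per text."""
    token_lists = [tokenize(t) for t in texts]
    # Rank words over the whole dataset and keep the N most common.
    totals = Counter()
    for tokens in token_lists:
        totals.update(tokens)
    function_words = [w for w, _ in totals.most_common(n_function_words)]
    vectors = []
    for tokens in token_lists:
        counts = Counter(tokens)
        # Normalize by text length to remove the effect of differing text sizes.
        vectors.append([counts[w] / len(tokens) for w in function_words])
    return function_words, vectors
```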

We examine two powerful supervised learning approaches for performing data classification, Multiple Discriminant Analysis (MDA) and the Support Vector Machine (SVM). The same training dataset is input into both of them. To measure the accuracy of the methods, leave-one-out cross-validation (LOO-CV) is employed.

Multiple Discriminant Analysis

Multiple Discriminant Analysis (MDA) is a statistical technique designed to assign unknown cases to a known group by using predictor variables. The first step in this technique is to determine whether the groups differ significantly with regard to the means of the predictor variables. If there are significant differences, then the variables can be used as discriminating variables. Using the discriminating variables, MDA generates discriminant functions that minimize the training error, while maximizing the margin separating the data classes. The basic idea is to form the most distinct groups possible by maximizing the intergroup variance, while minimizing the pooled intragroup variance. If there are $n$ groups in a training dataset, $n-1$ discriminant functions are generated. The $i$-th discriminant function is given by

$f_i = c_i + w_{i1} x_1 + w_{i2} x_2 + \cdots + w_{im} x_m$, (1)

where $c_i$ is the constant term, $x_1, \ldots, x_m$ are the observed values of the style markers for each case, and $w_{i1}, \ldots, w_{im}$ are the corresponding weights of those variables derived from the discriminant analysis [22]–[24].

In this study, we use the SPSS statistical analysis software package [22] to carry out the Multiple Discriminant Analysis. To prevent over-fitting, stepwise MDA is preferred. In stepwise MDA, at each step all function word counts are evaluated to determine which variables contribute most to the prediction of group membership, and those variables are added to the analysis. This process is iterated, stopping when no new variable contributes significantly to the discrimination between groups. Thus all the function word counts enter the analysis, but those that do not contribute to the discrimination between different authors are excluded from the discriminant function.
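SPSS's stepwise procedure has no direct scikit-learn equivalent, but a greedy forward selection in the same spirit can be sketched as follows (an approximation with a fixed feature budget rather than SPSS's significance-based stopping rule):

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector

def forward_select_function_words(X, y, n_keep=20):
    """Greedily add the function-word features that most improve
    cross-validated discrimination between author groups."""
    selector = SequentialFeatureSelector(
        LinearDiscriminantAnalysis(),
        n_features_to_select=n_keep,
        direction='forward',
    )
    selector.fit(X, y)
    return selector.get_support(indices=True)  # indices of retained function words
```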

Here, MDA utilizes the normalized function word frequencies as the discriminant variables and the authors as the grouping variables. The pre-classified training dataset is fed to the MDA and the centroid for each group, that is, the mean value of the discriminant function scores, is found. The disputed text is assigned to the author group that has the smallest Mahalanobis distance between the group's centroid and the disputed text. The Mahalanobis distance is calculated by

$D^2 = (\mathbf{x} - \boldsymbol{\mu})^{\mathrm{T}} S^{-1} (\mathbf{x} - \boldsymbol{\mu})$, (2)

where $\mathbf{x}$ is the disputed text's vector of discriminant function scores, $\boldsymbol{\mu}$ is the mean vector of discriminant function scores for an author's group, $S$ is its covariance matrix, and T denotes the matrix transpose.
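A minimal sketch of this assignment rule (the study uses SPSS; here scikit-learn's LinearDiscriminantAnalysis stands in to produce the discriminant scores, and the helper name is illustrative):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def nearest_centroid_author(X_train, y_train, x_disputed):
    """Project texts into discriminant space, then assign the disputed text to
    the author whose centroid has the smallest Mahalanobis distance (Eq. 2)."""
    y_train = np.asarray(y_train)
    lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
    Z = lda.transform(X_train)                        # discriminant scores
    z = lda.transform(np.atleast_2d(x_disputed))[0]   # disputed text's scores
    best_author, best_d2 = None, np.inf
    for author in np.unique(y_train):
        scores = Z[y_train == author]
        mu = scores.mean(axis=0)
        S = np.atleast_2d(np.cov(scores, rowvar=False))    # group covariance
        d2 = (z - mu) @ np.linalg.pinv(S) @ (z - mu)       # squared Mahalanobis
        if d2 < best_d2:
            best_author, best_d2 = author, d2
    return best_author, np.sqrt(best_d2)
```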

Support Vector Machine

The Support Vector Machine (SVM) is a supervised learning algorithm that uses a training dataset and then classifies the data in question. It classifies data by finding the best hyperplane that separates clusters of features represented in an $n$-dimensional space. Linear classification SVMs use a real-valued linear function $f(\mathbf{x})$, which assigns the $n$-dimensional input vector $\mathbf{x}$ to the positive class if $f(\mathbf{x}) \geq 0$, and to the negative class if $f(\mathbf{x}) < 0$. Here $f(\mathbf{x})$ can be written as [16]

$f(\mathbf{x}) = \langle \mathbf{w} \cdot \mathbf{x} \rangle + b$, (3)

where $\langle \cdot \rangle$ denotes the dot product, $\mathbf{w}$ is the weight vector that is normal to the hyperplane, and $b$ is the bias or offset of the hyperplane from the origin. Basically, an SVM is a two-class or binary classifier. When there are more than two groups, the classification problem reduces to several binary classification problems. Multi-class SVMs classify data by finding the best hyperplanes that separate each pair of classes [16].

The geometrical interpretation of an SVM in an $n$-dimensional space is an $(n-1)$-dimensional hyperplane that separates the two groups. In this scheme the goal is to maximise the margins between the hyperplane and the two classes. In more complicated situations, the points cannot be separated by linear functions. In this case, an SVM uses a kernel function to map the data into a higher-dimensional space, where a hyperplane is calculated that optimally separates the data. Many different kernels have been developed; however, only a few work well in general. Aside from the linear SVM, common kernels are the polynomial kernel, the Radial Basis Function (RBF) kernel, and the sigmoid kernel, defined here in the standard LIBSVM forms [25], [26]:

polynomial: $K(\mathbf{x}_i, \mathbf{x}_j) = (\gamma\, \mathbf{x}_i^{\mathrm{T}} \mathbf{x}_j + r)^d$,

RBF: $K(\mathbf{x}_i, \mathbf{x}_j) = \exp(-\gamma\, \|\mathbf{x}_i - \mathbf{x}_j\|^2)$,

sigmoid: $K(\mathbf{x}_i, \mathbf{x}_j) = \tanh(\gamma\, \mathbf{x}_i^{\mathrm{T}} \mathbf{x}_j + r)$.
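For concreteness, these kernels translate directly into numpy (a literal transcription of the formulas above; the study itself relies on LIBSVM [26], which implements them internally):

```python
import numpy as np

def linear_kernel(xi, xj):
    return xi @ xj

def polynomial_kernel(xi, xj, gamma, r, d):
    return (gamma * (xi @ xj) + r) ** d

def rbf_kernel(xi, xj, gamma):
    return np.exp(-gamma * np.sum((xi - xj) ** 2))

def sigmoid_kernel(xi, xj, gamma, r):
    return np.tanh(gamma * (xi @ xj) + r)
```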

There is no systematic methodology for predicting the best kernel with the best parameters for a specific application [24]. In this paper, the best type of kernel and its parameters, such as $\gamma$ and $r$, are found via an optimization procedure that maximizes the accuracy of classification.

Leave-One-Out Cross-Validation (LOO-CV)

Leave-one-out cross-validation (LOO-CV) is applied to evaluate the accuracy of both classification methods. At each step, one text is left out of the training dataset and treated as a text of disputed authorship [27]. The classification model is constructed on the remaining data and the algorithm classifies the left-out text. The same procedure is applied to every text in the training dataset and the classification accuracy is calculated by

$\text{accuracy} = \dfrac{\text{number of correctly classified texts}}{\text{total number of texts}} \times 100\%$. (4)
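A sketch of this procedure, assuming any classifier object with scikit-learn-style fit/predict methods:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut

def loo_cv_accuracy(classifier, X, y):
    """Hold out each text in turn, train on the remainder, and classify the
    held-out text; return the percentage classified correctly (Eq. 4)."""
    X, y = np.asarray(X), np.asarray(y)
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        classifier.fit(X[train_idx], y[train_idx])
        correct += int(classifier.predict(X[test_idx])[0] == y[test_idx][0])
    return 100.0 * correct / len(y)
```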

Results

We first investigate the performance of both the MDA and SVM methods using a dataset in which the authors are known with certainty. For this dataset we use an English corpus of known authors, as listed in Table 1. Next, we apply our methods to two examples in order to understand where some of the limitations and open questions lie. First, we examine the question of the disputed texts in the Federalist Papers – as we shall see, this raises the question of what happens when texts are possibly the result of collaboration, and suggests various items for future work. Second, we investigate and revisit de Morgan's author attribution problem of the New Testament, where the authorship of the Letter to the Hebrews has been debated by scholars since the third century. Here, we use the original Koine Greek texts of the New Testament, illustrating how our approach is portable to non-English texts and highlighting a number of limitations for future study.

Benchmark Testing on an English Corpus of Known Authorship

To evaluate the accuracy and reliability of our methods, it is necessary to first test them on a set of texts with known authors, one that does not have the limitations and deficiencies of the New Testament or the Federalist Papers. This forms a benchmark for comparing the methods and evaluating the effect of limited text length or training dataset size.

Our selected corpus of English texts is obtained from the Project Gutenberg archives [28]. It contains 168 short stories by seven undisputed authors, namely, B. M. Bower, Richard Harding Davis, Charles Dickens, Sir Arthur Conan Doyle, Zane Grey, Henry James, and Andrew Lang. All of these authors wrote fictional literature in English in the same era (late 19th to early 20th century), so the genre and time period are reasonably uniform and the key discriminant feature is the authors' differing styles [23]. Due to the differing lengths of the books, we truncate each of them to approximately the first 5000 words. The texts are listed in Table 1. Both the MDA and SVM classification methods are applied and the results are compared. Figure 1 shows the LOO-CV accuracy for both methods using different numbers of function words. The accuracy of both methods improves with every additional function word up to around 20 function words. Between 20 and 60 function words there is still some improvement, but beyond that the accuracy plateaus.

Figure 1. Number of function words vs. LOO-CV accuracy.

The SVM uses a polynomial kernel with optimized values of $\gamma$, $r$, and $d$. Both the MDA and SVM accuracies increase with an increasing number of function words up to 100 words, but neither improves significantly beyond this point. These tests use the known English corpus given in Table 1.

https://doi.org/10.1371/journal.pone.0054998.g001

MDA Results

Table 2 shows the LOO-CV results of MDA for 7 authors and 100 function words. The numbers on the leading diagonal show the correct assignments, which occur in 96.4% of the cases.

Table 2. LOO-CV results for MDA classification of the English corpus.

https://doi.org/10.1371/journal.pone.0054998.t002

SVM Results

Author attribution problems with large datasets and several authors cannot, in most cases, be resolved with a linear SVM. The choice of kernel type and kernel parameters are two significant factors in obtaining the best result. Aside from the number of function words, the kernel's parameters can be optimized to obtain the best classification accuracy. Optimization is carried out to extract the best possible kernel for the given training data. The optimization process is a grid search over exponentially growing sequences of the penalty parameter $C$ and the kernel parameter $\gamma$. This optimization first employs the LOO-CV technique to check each combination of parameter choices, and then selects those parameters that result in the best LOO-CV accuracy. Next, the final model is trained on the whole training set using the chosen parameters [29].
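This search can be sketched with scikit-learn's LIBSVM-backed SVC; the grids below are illustrative placeholders in the spirit of [29], not the exact value sets used in the study:

```python
from sklearn.model_selection import GridSearchCV, LeaveOneOut
from sklearn.svm import SVC

# Exponentially growing candidate values (illustrative; cf. Hsu et al. [29]).
param_grid = {
    'kernel': ['linear', 'poly', 'rbf', 'sigmoid'],
    'C':      [2.0 ** k for k in range(-5, 16, 2)],
    'gamma':  [2.0 ** k for k in range(-15, 4, 2)],
}

def fit_best_svm(X_train, y_train):
    """Grid search scored by LOO-CV accuracy; the best model is then
    refit on the whole training set (GridSearchCV's default refit=True)."""
    search = GridSearchCV(SVC(), param_grid, scoring='accuracy', cv=LeaveOneOut())
    search.fit(X_train, y_train)
    return search.best_estimator_, search.best_score_
```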

The results are summarised in Table 3. With 95 function words, 92.2% of cases are classified correctly under LOO-CV. This represents an improvement of 12% compared to the best results of recent studies that adopted SVM classifiers [21], [30].

Table 3. LOO-CV results for SVM classification of the English corpus.

https://doi.org/10.1371/journal.pone.0054998.t003

This accuracy is quite good, but here there is a large number of words in each text and the amount of training data per author is also large. In many real situations, texts can be rather short and there are few texts per author. Hence, it is necessary to evaluate the effect that limited training data has on the accuracy.

Effect of Training Dataset Size on MDA and SVM Accuracy

To investigate the effect of training dataset size, the number of texts per author is varied while other variables are kept constant. Different numbers of texts per author are available in our dataset; the minimum is 14. Therefore, in order to investigate how the dataset size affects accuracy, the classification procedure is repeated with 14 texts per author, then 13 texts per author, and so on, down to zero texts. At each step there are two groups of data: the texts that have been used as the training dataset, and the remainder, which we call the hold-out dataset. As there are two different types of input data (training and hold-out), we can adopt two measures of accuracy at each step. The first measure is obtained by carrying out LOO-CV across the training dataset. The second feeds the hold-out texts into the classifiers, which attribute each text to one of the candidate authors. In this test case we already know the actual authors of the texts, so we compare the classifier results with the known authorship to find how many attributions are correct. The accuracy is then the ratio of the number of correct attributions to the total number of hold-out texts. Figures 2 and 3 summarize the results for MDA and the SVM, respectively, with a sketch of the procedure given below. In both graphs the accuracies using the training texts and the hold-out texts are shown.
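The experiment just described can be sketched as follows (illustrative helper names; `loo_cv_accuracy` is the sketch from the LOO-CV section above, and `texts_by_author` is assumed to map each author to a list of feature vectors):

```python
import numpy as np

def accuracy_vs_training_size(classifier, texts_by_author, loo_cv_accuracy,
                              max_per_author=14):
    """For each training size k, keep k texts per author for training and use
    the rest as the hold-out set; report LOO-CV and hold-out accuracy."""
    results = []
    for k in range(2, max_per_author + 1):
        X_tr, y_tr, X_ho, y_ho = [], [], [], []
        for author, texts in texts_by_author.items():
            X_tr += texts[:k];  y_tr += [author] * k
            X_ho += texts[k:];  y_ho += [author] * (len(texts) - k)
        cv_acc = loo_cv_accuracy(classifier, X_tr, y_tr)       # first measure
        classifier.fit(np.asarray(X_tr), np.asarray(y_tr))     # second measure:
        ho_acc = 100.0 * np.mean(                              # hold-out accuracy
            classifier.predict(np.asarray(X_ho)) == np.asarray(y_ho))
        results.append((k, cv_acc, ho_acc))
    return results
```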

Figure 2. Number of texts per author vs. accuracy of MDA classifier.

This graph shows accuracy versus the size of the training dataset for the MDA case, with a fixed set of 100 function words, for the benchmark English corpus of known texts given in Table 1. The upper curve shows the LOO-CV accuracy of MDA as a function of the number of texts per author, obtained by deliberately limiting the size of the training dataset. The lower curve shows the MDA accuracy obtained by inputting the hold-out texts to the classifier at each step.

https://doi.org/10.1371/journal.pone.0054998.g002

Figure 3. Number of texts per author vs. accuracy of SVM classifier.

This graph shows accuracy versus the size of the training dataset for the SVM case, with a fixed set of 95 function words, for the benchmark English corpus of known texts given in Table 1. The SVM utilizes a polynomial kernel with optimized parameter values.

https://doi.org/10.1371/journal.pone.0054998.g003

The Federalist Papers

The Federalist Papers are a series of 85 political essays published under the name ‘Publius’ in 1788. At first, the real author(s) were a guarded secret, but scholars now accept that Alexander Hamilton, James Madison, and John Jay are the authors. Later, Hamilton and then Madison each provided lists declaring authorship [31], [32]. The difference between these two lists is that 12 essays were claimed by both Madison and Hamilton individually. So 73 texts might be considered to have known author(s), while 12 are of disputed authorship. These 12 disputed texts are essay numbers 49–58, 62, and 63. An early study carried out by Mosteller and Wallace (1964) concluded that all of the disputed essays were written by Madison, with the possible exception of essay number 55, which might have been written by Hamilton [10], [33]. Not all researchers agree with this conclusion. Some scholars also suggest that essay number 64, normally attributed to Jay, was written by Madison [31], so we also consider essay number 64 as a disputed text. In total, this gives us 13 disputed essays and 72 undisputed essays. Amongst the undisputed texts, 51 essays are written by Hamilton, 14 by Madison, and 4 by Jay. Three essays (numbers 18, 19, and 20) are products of collaboration between Hamilton and Madison [34], [35].

The texts are obtained from the Project Gutenberg Archives [28]. We put aside the three essays of collaborative authorship and take the remaining 69 essays as the training dataset. The same function word list (see Table 4) is used for our MDA and SVM classifiers. Because there are three authors, MDA produces two discriminant functions, which are shown in Figure 4. For the Federalist Papers of undisputed authorship, the LOO-CV accuracy is 97.1%, close to the LOO-CV accuracy for the SVM of 95.6%. In both methods the number of function words required to achieve the highest accuracy is 75. The assigned authors for the disputed texts under both methods are summarized in Table 5. The MDA results in Table 5 are obtained by attributing each text to the author with the lowest Mahalanobis distance from the text; the Mahalanobis distances are shown in Table 6. A more critical approach is to select the author with the lowest Mahalanobis distance only if, for each contending author, the Mahalanobis distance between the text and that contending author's centroid is greater than or equal to the longest distance (LD) between the contending author's known texts and that author's centroid. Such cases have a much higher degree of certainty, and are indicated with an asterisk in Table 5. A sketch of this criterion follows.
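A minimal sketch of the asterisk criterion with illustrative names, taking the per-author distances and longest within-cluster distances (LD) as plain dictionaries:

```python
def confident_attribution(distance, longest):
    """distance[a]: Mahalanobis distance from the disputed text to author a's
    centroid; longest[a]: longest distance from a's known texts to a's centroid.
    Accept the nearest author only if the text lies outside every contending
    author's known cluster; otherwise return None (no asterisk)."""
    ranked = sorted(distance, key=distance.get)
    nearest, contenders = ranked[0], ranked[1:]
    if all(distance[a] >= longest[a] for a in contenders):
        return nearest
    return None
```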

Figure 4. Canonical discriminant functions for the Federalist Papers.

This is the result of MDA on the Federalist Papers using two discriminant functions. Each point represents a text, which is plotted according to the values of its discriminant functions. Here, 75 function words are utilised, which yields the most accurate result. Open circles indicate known texts, asterisks indicate the 13 disputed texts in question, and the crosses indicate the centroids of the known author clusters.

https://doi.org/10.1371/journal.pone.0054998.g004

Table 5. The predicted authors for the 13 disputed Federalist Papers.

https://doi.org/10.1371/journal.pone.0054998.t005

Table 6. Mahalanobis distances from each Federalist Paper of disputed authorship to each author centroid.

https://doi.org/10.1371/journal.pone.0054998.t006

Without exception, all asterisked cases are supported by the SVM results in Table 5. Thus we can confirm the conclusion of Mosteller and Wallace [10] that Essay 55 is likely to be by Hamilton and that Essays 51, 53, and 62 are more likely to be by Madison. We are not able to draw a conclusion regarding the remaining essays, and suggest that future work investigate the possibility that Essays 49, 50, 52, 54, 56, 57, 58, 63, and 64 are the result of some degree of collaboration.

The geometry of the MDA method allows us to develop a simple and intuitive new method for assigning a likelihood measure to each authorship attribution, one that takes into account not only how close a text is to its assigned author centroid but also how far away it is from the second-nearest candidate author.

Let us imagine that a disputed text is close to the centroid of Author A, and the next-to-nearest centroid is that of Author B. For a high likelihood of a match, we want the ratio $L_A/D_A$ to be as large as possible and certainly greater than unity, where $D_A$ is the Mahalanobis distance between the disputed text and the centroid of Author A, and $L_A$ is the longest Mahalanobis distance between Author A's known texts and Author A's centroid. Coupled with this, we want the ratio $L_B/D_B$ to be as small as possible and certainly less than unity, where $D_B$ is the Mahalanobis distance between the disputed text and the centroid of Author B, and $L_B$ is the longest Mahalanobis distance between Author B's known texts and Author B's centroid. Thus we define the likelihood of a match as given by

$\Lambda = \dfrac{L_A}{D_A} - \dfrac{L_B}{D_B}$. (5)

The certainty of a match increases as $\Lambda$ increases, and $\Lambda$ goes to zero when the two terms are equal, as expected. By applying this methodology to the Mahalanobis distances in Table 6, we can re-allocate the authorship attributions and rank them according to the likelihood $\Lambda$, as shown in Table 7.
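Under the reconstruction of Eq. (5) above, the ranking computation is a short helper over the tabulated distances (illustrative names, same dictionary convention as before):

```python
def match_likelihood(distance, longest):
    """Return the nearest author A and the likelihood
    Λ = L_A/D_A − L_B/D_B, where B is the next-nearest author (Eq. 5)."""
    ranked = sorted(distance, key=distance.get)
    a, b = ranked[0], ranked[1]
    lam = longest[a] / distance[a] - longest[b] / distance[b]
    return a, lam
```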

Table 7. Authorship attribution for the Federalist Papers ranked by likelihood, $\Lambda$.

https://doi.org/10.1371/journal.pone.0054998.t007

As can be seen in Table 7, there is a relatively high likelihood that Essay 62 was written by Madison. The other assignments have less certainty; in particular, the last seven assignments, with likelihoods close to or less than unity, are much less certain. How can this be, given the very high accuracy of the MDA method on the English corpus? A likely scenario is that the lower-ranked texts are the products of a greater degree of collaboration between the authors, and this remains an open question for future investigation.

The Letter to the Hebrews

Traditionally, the Letter to the Hebrews is attributed to the Apostle Paul, also known as Saul of Tarsus. From the third century AD onwards, many scholars have debated this idea. Three further suggestions for the authorship of the Letter to the Hebrews are Barnabas, Luke the Evangelist, and Clement of Rome [36]. Luke and Paul are amongst the authors of the New Testament, Clement was an apostolic father, and Barnabas was an early Christian disciple. Works of these four possible authors, together with three other New Testament authors – Mark, Matthew, and John – and another apostolic father, Ignatius of Antioch, are tested to determine the most likely author. All of the selected texts were written in the first century. The function word method is used to obtain the set of stylometry vectors, and both MDA and SVM are used for classification.

The New Testament and non-canonical texts are obtained from the Society of Biblical Literature [37] and the Christian Classics Ethereal Library [38], respectively. All the source texts are in Koine Greek, and we pre-process them to remove any headings, verse numbers, and punctuation introduced by modern editors. Note that the original Koine Greek has no punctuation marks and no accents. As our software handles only the ASCII characters a-z and space, we transliterate the Greek text into our required ASCII set using the look-up table given in Table 8. Only a limited number of texts with certainly known authors are available from the first century, and the length of text per author varies from 5,000 to 50,000 words. Based on our experiments, an equal length of text per author gives improved accuracy. A possible solution might be to truncate the texts to make them all of equal length; however, this is problematic because we have a limited data size and need to utilize and extract any information hidden in all the available data. To address this difficulty, the known texts of each author are concatenated together and then divided into four equal parts, as sketched below. The length of each text for different authors now varies between 1,600 and 10,000 words per text, which reduces the ratio of largest to smallest text from 10 to 6.25. Table 9 lists the names of the texts used for each author along with their word lengths. The vector of frequencies of occurrence of the function words is normalized by dividing by the number of words per text. The normalized vectors are then ready for entry into the classification stage. This method alleviates the problem of differing dataset sizes.
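This equalization step is straightforward to sketch (hypothetical helper; each author's texts are assumed already tokenized):

```python
def concatenate_and_split(author_texts, n_parts=4):
    """Concatenate one author's tokenized texts and split the result into
    n_parts chunks of near-equal length, as described above."""
    tokens = [tok for text in author_texts for tok in text]
    size = len(tokens) // n_parts
    parts = [tokens[i * size:(i + 1) * size] for i in range(n_parts - 1)]
    parts.append(tokens[(n_parts - 1) * size:])  # last part absorbs the remainder
    return parts
```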

Table 9. Source Texts from New Testament and Apostolic Fathers in Koine Greek.

https://doi.org/10.1371/journal.pone.0054998.t009

Applying stepwise MDA to the training dataset gives an LOO-CV accuracy of 90.6%, which is quite good for such a small dataset with several authors. Figure 5 shows the first three discriminant functions for all texts.

Figure 5. First three canonical discriminant functions for New Testament authors and Apostolic Fathers.

This plot shows the MDA results for the Greek texts, in order to determine which author's cluster of texts is closest to the Letter to the Hebrews. We use seven discriminant functions in this analysis; however, only the first three are plotted here for illustrative purposes. There are four data points per author, as each author's known texts are concatenated and divided into four parts.

https://doi.org/10.1371/journal.pone.0054998.g005

Note that there are seven discriminant functions and the plot shows three of them, for illustrative purposes only. However, in order to calculate the actual Mahalanobis distances, we consider all seven functions. Table 10 shows the Mahalanobis distance between the Letter to the Hebrews and each of the author centroids. Note that Table 10 also shows the longest Mahalanobis distance from each author's known texts to the respective group centroid. The results show that whilst the Letter to the Hebrews is indeed closest to Paul, it is nevertheless further away than all the undisputed texts of Paul. This illustrates the difficulty underlying the centuries of disagreement between scholars on the authorship of the Letter to the Hebrews. The second-closest author is Luke, who is also one of the mooted authors. Moreover, using an SVM with an optimized polynomial kernel, an LOO-CV classification accuracy of 87.5% is obtained and the Letter to the Hebrews is attributed to Luke. In fact, an early statement on the authorship of the Letter to the Hebrews suggested that Paul initially wrote it in Hebrew and Luke translated it into Greek [39]. So one possible hypothesis is that we are seeing the effect of translation on the style of an author, and this is consistent with the results of our analysis.

Table 10. The Mahalanobis distance between the Letter to the Hebrews and the centroids of authors.

https://doi.org/10.1371/journal.pone.0054998.t010

Limitations of the Study

A key assumption underlying all attempts at automated authorship attribution is that the authors in question write with a consistent style. It is known that style can change dramatically if a mental disorder, such as Alzheimer's disease, is acquired. A limitation in the specific case of the Letter to the Hebrews is the small number of texts of known authorship. Could there be other authors in existence whose works are closer to the Letter to the Hebrews than Paul's? There are many extra-canonical texts in existence, and future work must exhaustively check these as they become available in electronic format. Whilst the likelihood function we adopted is simple and provides a relative ranking, it has no characteristic scale and is not appropriate for absolute comparisons from corpus to corpus. It also implicitly fits a hyperbolic distribution to the data and assumes the points are spread in a circularly symmetric fashion rather than an ellipsoidal one. Thus, future work may further elaborate the likelihood function.

Future Work

In this study we focussed on stripping the text of all punctuation, as the developed classification methods are then readily portable to other languages such as Koine Greek. Thus, when we tested our techniques using known English texts, we also stripped the texts of all punctuation to bring them into the form of interest. Future work that focusses specifically on the authorship of English texts may benefit from including punctuation, as it carries style information that may assist in characterizing an author. When extending the work to classify the authorship of emails and SMS messages, it may be of greater importance to include not only all natural punctuation but also numbers, emoticons, letter case, redundant spaces, and even idiosyncratic errors. Future work may also investigate types of feature vector for classification other than word frequency, such as the word recurrence interval (WRI) [2]. A potential advantage of WRI is that it removes any genre-dependence due to the specific use of words, as it measures how words cluster whilst disregarding the actual words used.

In regard to elaborating the likelihood weighting for ‘soft’ classification of each text, possible future directions may consider the use of least squares optimisation [40]–[42] or fuzzy c-means (FCM) methods [43]–[45].

Conclusion

In conclusion, we have developed a methodology for the automated detection of authorship, using the frequencies of function words as classification features. There are three critical steps: (i) preprocessing the texts, (ii) extracting classification features, and (iii) performing classification. With regard to the third step, this work compares the performance of an MDA classifier to that of an SVM classifier. Whilst the accuracy of both methods is better than 90%, the SVM is somewhat limiting as it provides only binary decisions. On the other hand, the MDA approach allows more flexibility, and enables us to develop a method of ranking authorship attributions according to likelihood. For future work, the MDA approach may therefore be more useful as a method for investigating the degree of collaboration between authors.

With regard to the disputed essays of the Federalist Papers, both the MDA and SVM approaches confirm the present scholarly consensus that Essay 62 was indeed written by James Madison. Furthermore, the MDA method reveals that the match between Madison and Essay 62 has the highest degree of certainty of all the 13 disputed essays.

On the question of the authorship of the Letter to the Hebrews, we find using the MDA method that the texts of the Apostle Paul are the closest in style, followed by those of Luke the Evangelist. This would appear to favour the traditional belief that Paul is the author. However, the corresponding Mahalanobis distance is longer than the furthest distance between any of Paul's known texts and their stylometric average, suggesting that the link between Paul and the Letter to the Hebrews is weak.

Thus there are two hypotheses to investigate in future work: (i) could the Letter to the Hebrews have been originally written in Hebrew by Paul, and then later translated into Greek by Luke? Or (ii) could there be a further extra-canonical author whose style is closer to that of the Letter to the Hebrews? At present, only a small subset of existing Koine Greek texts is available in electronic format; as further Koine texts become available, more exhaustive tests can be carried out.

Additional Information

Software.

The LIBSVM library of SVM routines is publicly available [46].

Acknowledgments

Thanks are due to François-Pierre Huchet, ITII Pays de la Loire, Nantes, France, for assistance with software coding. We warmly thank J. José Alviar, Professor of Dogmatic Theology, Universidad de Navarra, Spain, for useful discussions and assistance with locating extracanonical source texts.

Author Contributions

Proofed the paper: TP MJB AA BWHN DA. Conceived the project: DA. Conceived and designed the experiments: ME TP DA. Performed the experiments: ME. Analyzed the data: ME TP MJB AA BWHN DA. Contributed reagents/materials/analysis tools: ME TP MJB AA BWHN DA. Wrote the paper: ME.

References

  1. Sabordo M, Chai SY, Berryman MJ, Abbott D (2004) Who wrote the Letter to the Hebrews? – Data mining for detection of text authorship. In: Proc. SPIE Smart Structures, Devices, and Systems 5649, Sydney, Australia, Dec. 12–15: 513–524.
  2. Berryman MJ, Allison A, Abbott D (2003) Statistical techniques for text classification based on word recurrence intervals. Fluctuation and Noise Letters 3: L1–L10.
  3. Hirst G, Feng VW (2012) Changes in style in authors with Alzheimer's disease. English Studies 93: 357–370.
  4. Baayen H, van Halteren H, Neijt A, Tweedie F (2002) An experiment in authorship attribution. 6es Journées Internationales d'Analyse Statistique de Données Textuelles 1: 69–79.
  5. Sayoud H (2012) Author discrimination between the Holy Quran and Prophet's statements. Literary and Linguistic Computing 1: 1–18.
  6. Juola P (2012) Large-scale experiments in authorship attribution. English Studies 93: 275–283.
  7. Alviar JJ (2008) Recent advances in computational linguistics and their application to Biblical studies. New Testament Studies 54: 139–150.
  8. Luyckx K, Daelemans W (2008) Authorship attribution and verification with many authors and limited data. In: Proc. 22nd International Conference on Computational Linguistics – Volume 1: 513–520.
  9. Ortuño M, Carpena P, Bernaola-Galván P, Muñoz E, Somoza AM (2002) Keyword detection in natural languages and DNA. Europhysics Letters 57: 759–764.
  10. Mosteller F, Wallace DL (1964) Inference and Disputed Authorship: The Federalist. Addison-Wesley.
  11. Stamatatos E (2009) A survey of modern authorship attribution methods. Journal of the American Society for Information Science and Technology 60: 538–556.
  12. Koppel M, Schler J, Argamon S, Winter Y (2012) The fundamental problem of authorship attribution. English Studies 93: 284–291.
  13. Mustafa TK, Mustapha N, Azmi MA, Sulaiman NB (2010) Dropping down the maximum item set: Improving the stylometric authorship attribution algorithm in the text mining for authorship investigation. Journal of Computer Science 6: 235–243.
  14. Estival D (2008) Author attribution with email messages. Journal of Science, Vietnam National University 1: 1–9.
  15. Chen X, Hao P, Chandramouli R, Subbalakshmi K (2011) Authorship similarity detection from email messages. Machine Learning and Data Mining in Pattern Recognition 6871: 375–386.
  16. Cristianini N, Shawe-Taylor J (2000) An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press.
  17. Iqbal F, Hadjidj R, Fung B, Debbabi M (2008) A novel approach of mining write-prints for authorship attribution in e-mail forensics. Digital Investigation 5: S42–S51.
  18. Tsimboukakis N, Tambouratzis G (2010) A comparative study on authorship attribution classification tasks using both neural network and statistical methods. Neural Computing & Applications 19: 573–582.
  19. Savoy J (2013) Authorship attribution based on a probabilistic topic model. Information Processing & Management 49: 341–354.
  20. Zhao Y, Zobel J (2005) Effective and scalable authorship attribution using function words. In: Proc. Second AIRS Asian Information Retrieval Symposium. Springer, 174–189.
  21. Diederich J, Kindermann J, Leopold E, Paass G (2003) Authorship attribution with support vector machines. Applied Intelligence 19: 109–123.
  22. Green S, Salkind N (2005) Using SPSS for Windows and Macintosh: Understanding and Analysing Data. Upper Saddle River, NJ: Prentice-Hall.
  23. Putniņš TJ, Signoriello DJ, Jain S, Berryman MJ, Abbott D (2005) Advanced text authorship detection methods and their application to Biblical texts. In: Proc. SPIE Complex Systems 6039 (ed. Axel Bender), Brisbane, Queensland, Australia, December 11–14: 1–13.
  24. Ayat NN, Cheriet M, Suen CY (2005) Automatic model selection for the optimization of SVM kernels. Pattern Recognition 38: 1733–1745.
  25. Lewicki P, Hill T (2005) Statistics: Methods and Applications. StatSoft, Inc.
  26. Chang CC, Lin CJ (2011) LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST) 2: 27.
  27. Picard RR, Cook RD (1984) Cross-validation of regression models. Journal of the American Statistical Association 79: 575–583.
  28. Project Gutenberg Website. Available: http://promo.net/pg/, accessed 2012 Aug.
  29. Hsu CW, Chang CC, Lin CJ (2010) A practical guide to support vector classification (technical report). Available: http://www.csie.ntu.edu.tw/~cjlin, accessed 2012 Jun.
  30. Joachims T (1998) Text categorization with support vector machines: Learning with many relevant features. Machine Learning: ECML-98 1: 137–142.
  31. Adair D (1944) The authorship of the disputed Federalist Papers: Part II. The William and Mary Quarterly, Third Series 1: 235–264.
  32. Holmes DI, Forsyth RS (1995) The Federalist revisited: New directions in authorship attribution. Literary and Linguistic Computing 10: 111–127.
  33. Tweedie FJ, Singh S, Holmes DI (1996) Neural network applications in stylometry: The Federalist Papers. Computers and the Humanities 30: 1–10.
  34. Bosch RA, Smith JA (1998) Separating hyperplanes and the authorship of the disputed Federalist Papers. The American Mathematical Monthly 105: 601–608.
  35. Fung G (2003) The disputed Federalist Papers: SVM feature selection via concave minimization. In: Proc. 2003 ACM Conference on Diversity in Computing. ACM, pp. 42–46.
  36. Wallace DB (2000) Hebrews: Introduction, Argument, and Outline. Biblical Studies Press.
  37. SBL Greek New Testament Website. Available: http://sblgnt.com/, accessed 2012 Jun.
  38. Christian Classics Ethereal Library Website. Available: http://www.ccel.org/, accessed 2012 Jun.
  39. Eusebius (1890) Church History. Buffalo, NY: Christian Literature Publishing Co. Translated by Arthur Cushman McGiffert.
  40. Gray D, Krieg M, Peake M (1998) Estimation of the parameters of mixed processes by least squares optimisation. In: Proc. 4th International Conference on Optimisation: Techniques and Applications (ICOTA'98), Perth, Australia, July: 891–898.
  41. Krieg M, Gray D (2001) Comparisons of PMH and PLS trackers on real and simulated data. In: Defence Applications of Signal Processing, eds. D. Cochran, W. Moran, and L. White. Elsevier, 126–133.
  42. Gray D, Krieg M, Peake M (1998) Tracking the parameters of mixed processes by least squares optimisation. In: Workshop Commun GdR ISIS (GT1) and NUWC: Approches Probabilistes Pour l'Extraction Multipistes, Paris, France, 9–10 November, art. no. 5.
  43. Theodoridis S, Koutroumbas K (2008) Pattern Recognition. Academic Press.
  44. Dunn J (1973) A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters. Journal of Cybernetics 3: 32–57.
  45. Cannon R, Dave J, Bezdek J (1986) Efficient implementation of the fuzzy c-means clustering algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence 8: 248–255.
  46. CSIE Website. Available: http://www.csie.ntu.edu.tw/~cjlin, accessed 2012 Jun.