Rapid Development for the History Web

This year I was privileged to design and teach an experimental (and somewhat improvisational) course spanning multiple disciplines. It is one of a small number of Digital History courses offered at the undergraduate level in the United States and, to the best of my knowledge, the only course of its kind to require students to conceive, design, and execute an original historical website in a matter of weeks. The course begins with a short overview of the history of computing, but the major part deals with current debates and problems confronting historians in the Digital Age. Students read theoretical literature on topics such as the gender divide, big data, and the democratization of knowledge, as well as digital history projects spanning the range of human experience, from ancient Greece to modern Harlem. Guest speakers discussed the complexities of database design and the legal terrain of fair use, open access, and privacy. The complete syllabus is available here.

Unusually for a humanities class, the students engaged in a series of labs to build and test digital literacy skills. This culminated in a final project asking them to select, organize, and interpret a body of original source material. I solicited ideas and general areas of interest for the project and posted a list to the class blog that grew over the course of the semester. Students expressed interest in newspaper databases, amateur history and genealogy, text mining and topic modeling, local community initiatives, and communications, culture, and new media. I thought it was important to find a project that would speak to every student’s interest while not playing favorites with the subject matter. We considered a plan to scan and present an archive of old student and university publications. It was a good idea, but it would have involved a lot of time-consuming rote digitization, access to restricted library collections, and shared use of limited scanning facilities.

Ultimately, the students decided to build an interactive database of runaway advertisements printed in colonial and early national Connecticut. This seemed to satisfy every major area of interest on our list and, when I polled the class, there was broad consensus that it would be an interesting experiment. The project grew out of an earlier assignment, which asked students to review websites pertaining to the history of slavery and abolition. It also allowed me to draw on my academic background researching and teaching about runaways. We settled on Connecticut because it is a relatively small state with a small population, as well as home to the nation’s oldest continuously published newspaper. At the same time, it was an important colonial outpost and deeply involved in the slave trade and other forms of unfree labor on a variety of fronts.

Drawing on the site reviews submitted earlier in the term, we brainstormed ideas for what features would and would not work on our site. The students were huge fans of Historypin, universally acclaimed for both content and interface, so we quickly agreed that the site should have a strong geospatial component. We also agreed that the site should emphasize accessibility, for use in classrooms and by researchers as well as the general public. Reading about History Harvest, OutHistory.org, and other crowdsourced community heritage projects instilled a desire to reach out to and collaborate with local educators. Settling on a feasible research methodology was an ongoing process. Although the project initially focused on runaway slaves, I gently encouraged a broader scope. Thus the final site presents ads for runaway children, servants, slaves, soldiers, wives, and prisoners and ties these previously disparate stories into a larger framework. Finally, a student who had some experience with web design helped us map out a work plan for the project based on the Web Style Guide by Patrick Lynch and Sarah Horton.

Since there were students from at least half a dozen different majors, with vastly different interests and skill sets, we needed a way to level the playing field, and specialized work groups seemed like a good way to do this. We sketched out the groups together in class and came up with four: Content, CMS, Outreach, and Accessibility. The Content Team researched the historiography on the topic and wrote most of the prose content, including the transcriptions of the advertisements. They used Readex’s America’s Historical Newspapers database to mine for content and collated the resulting data using shared Google Docs. The CMS Team, composed mostly of computer science majors, focused on building the framework and visual feel for the site. Theoretically they could have chosen any content management system, although I pushed for Omeka and Neatline as probably the best platforms for what we needed to do. The Outreach Team created a Twitter feed and a video documentary and solicited input about the site from a wide range of scholars and other professionals. The Accessibility Officer did extensive research and testing to make sure the site was fully compliant with open web standards and licenses.

The group structure had benefits and drawbacks. I tried to keep the system as flexible as possible. I insisted that major decisions be made by consensus and that group members post periodic updates to the class blog so that we could track our progress. Some students really liked it and floated around between different groups, helping out as necessary. I also received criticism on my evaluations from students who felt boxed in and complained that there was too much chaos and not enough communication between the groups. So I will probably rethink this approach in the future. One evaluator suggested that I ditch the collaborative project altogether and ask each student to create their own separate site, but that seems even more chaotic. In my experience, there are always students who want less group work and students who want more, and it is an ongoing struggle to find the right balance for a given class.

The assignment to design and publish an original historical site in a short amount of time, with no budget, almost no outside support, and only a general sense of what needed to be done is essentially a smaller, limited form of crowdsourcing. More accurately, it is a form of rapid development, in which the transition between design and production is extremely fast and highly mutable. Rapid development has been a mainstay of the technology industry for a while now. In my class, I cited the example of One Week | One Tool, in which a small group of really smart people get together and produce an original digital humanities tool. If they could do that over the course of a single week, I asked, what could an entire class of really smart people accomplish in a month?

The result, RunawayCT.org, is not anything fancy, but it is an interesting proof of concept. Because of the hit-or-miss nature of OCR on very old, poorly microfilmed newspapers, we could not get a scientific sample of advertisements. Figuring out how to properly select, categorize, format, and transcribe the data was no mean feat – although these are exactly the kinds of problems that scholarly history projects must confront on a daily basis. The Outreach Team communicated with the Readex Corporation throughout the project, and their representatives were impressively responsive and supportive of our use of their newspaper database. When the students asked Readex for access to their internal API so that we could automate our collection of advertisements, they politely declined. Eventually, I realized that there were literally thousands of ads, only a fraction of which are easily identified with search terms. So our selection of ads was impressionistic, with some emphasis on chronological breadth and on ads that were especially compelling to us.

Despite the students’ high level of interest in, even fascination with, the content of the ads, transcribing them could be tedious work. I attempted to apply OCR to the ad images using ABBYY FineReader and even digitized some newspaper microfilm reels to create high resolution copies, but the combination of eighteenth-century script and ancient, blurry microfilm rendered OCR essentially useless. Ads printed upside down, faded ink, and text disappearing into the gutters between pages were only a few of the problems with automatic recognition. At some point toward the end, I realized that my Mac has a pretty badass speech-to-text utility built into the OS. So I turned it on, selected the UK English vocabulary for the colonial period ads, and plugged in an old Rock Band mic (which doubles as an external USB microphone). Reading these ads, which are almost universally offensive, aloud in my room was a surreal experience. It was like reading out portions of Mein Kampf or Crania Americana, and it added a new materiality and gravity to the text. I briefly considered adding an audio component to the site, but after thinking about it for a while, in the cold light of day, I decided that it would be too creepy. One of my students pointed out that a popular educational site on runaway slaves is accompanied by the sounds of dogs barking and panicked splashing through rivers. And issues like these prompted discussion about what forms of public presentation would be appropriate for our project.

I purposely absented myself from the site design because I wanted the students to direct the project and gain the experience for themselves. On the other hand, if I had inserted myself more aggressively, things might have moved along at a faster pace. Ideas such as building a comprehensive data set, or sophisticated topic modeling, or inviting the public to participate in transcribing and commenting upon the documents, had to be tabled for want of time. Although we collected some historical maps of Connecticut and used them to a limited extent, we did not have the opportunity to georeference and import them into Neatline. This was one of my highest hopes for the project, and I may still attempt to do it at some point in the future. I did return to the site recently to add a rudimentary timeline to our exhibit. Geocoding took only minutes using an API and some high school geometry, so I assumed the timeline would be just as quick. Boy, was I wrong. To accomplish what I needed, I had to learn some MySQL tricks and hack the underlying database. I also had to make significant alterations to our site theme to get everything to display correctly.
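For the curious, a geocoding lookup like the one described can be done from the command line in a single request. This is only a sketch: the Nominatim endpoint, the jq filter, and the sample place name below are my stand-ins, not necessarily what we used.

# Look up coordinates for a place name (Nominatim and jq are illustrative assumptions)
curl -s -A "geocode-demo" \
  "https://nominatim.openstreetmap.org/search?q=Hartford,+Connecticut&format=json&limit=1" \
  | jq -r '.[0] | "\(.lat), \(.lon)"'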

One of the biggest challenges we faced as a class was securing a viable workspace for the project. Technology Services wanted us to use their institutional Omeka site, with little or no ability to customize anything, and balked at the notion of giving students shell access to their own server space. Instead, they directed us to Amazon Web Services, which was a fine compromise, but caused delays getting our system in place and will create preservation issues in the future. As it is now, the site will expire in less than a year, and when I asked, there was little interest in continuing to pay for the domain. I was told saving the site would be contingent on whether or not it is used in other classes and whether it “receives decent traffic.” (Believe it or not, that’s a direct quote.) One wonders how much traffic most student projects receive and what relationship that should bear to their institutional support.

Although not a finely polished gem, RunawayCT.org demonstrates something of the potential of rapid development for digital history projects. As of right now, the site includes almost 600 unique ads covering over half a century of local history. At the very least, it has established a framework for future experimentation with runaway ads and other related content. Several of the students told me they were thrilled to submit a final project that would endure and be useful to the broader world, rather than a hastily-written term paper that will sit in a filing cabinet, read only by a censorious professor. Given all that we accomplished in such a short time span, I can only guess what could be done with a higher level of support, such as that provided by the NEH or similar institutions. My imagination is running away with the possibilities.

Cross-posted at HASTAC

History Leaks

I am involved in a new project called History Leaks. The purpose of the site is to publish historically significant public domain documents and commentaries that are not available elsewhere on the open web. The basic idea is that historians and others often digitize vast amounts of information that remains locked away in their personal files. Sharing just a small portion of this information helps to increase access and draw attention to otherwise unknown or underappreciated material. It also supports the critically important work of archives and repositories at a time when these institutions face arbitrary cutbacks and other challenges to their democratic mission.

I hope that you will take a moment to explore the site and that you will check back often as it takes shape, grows, and develops. Spread the word to friends and colleagues. Contributions are warmly welcomed and encouraged. Any feedback, suggestions, or advice would also be of value. A more detailed statement of purpose is available here.

Combine JPEGs and PDFs with Automator

Like most digital historians, I have a personal computer packed to the gills with thousands upon thousands of documents in myriad formats and containers: JPEG, PDF, PNG, GIF, TIFF, DOC, DOCX, TXT, RTF, EPUB, MOBI, AVI, MP3, MP4, XLSX, CSV, HTML, XML, PHP, DMG, TAR, BIN, ZIP, OGG. Well, you get the idea. The folder for my dissertation alone contains almost 100,000 discrete files. As I mentioned last year, managing and preserving all of this data can be somewhat unwieldy. One solution to this dilemma is to do our work collaboratively on the open web. My esteemed colleague and fellow digital historian Caleb McDaniel is running a neat experiment in which he and his student assistants publish all of their research notes, primary documents, drafts, presentations, and other material online in a wiki.

Although I think there is a great deal of potential in projects like these, most of us remain hopelessly mired in virtual reams of data files spread across multiple directories and devices. A common issue is a folder with 200 JPEGs from some archival box or a folder with 1,000 PDFs from a microfilm scanner. One of my regular scholarly chores is to experiment with different ways to sort, tag, manipulate, and combine these files. This time around, I would like to focus on the last of those tasks: combining files. So if, like most people, you have been itching for a way to compile your entire communist Christmas card collection into a single handy document, today is your lucky day. Now you can finally finish that article on why no one ever invited Stalin over to their house during the holidays.

Combining small numbers of image files or PDFs into larger, multipage PDFs is a relatively simple point-and-click operation using Preview (for Macs) or Adobe Acrobat. But larger, more complex operations can become annoying and repetitive pretty quickly. Since I began my IT career on Linux and since my Mac runs on a similar Unix core, I tend to fall back on shell scripting for exceptionally complicated operations. The venerable, if somewhat bloated, PDFtk suite is a popular choice for the programming historian, but there are plenty of other options as well; I’ve found standalone tools like pdfsplit and pdfcat to be especially valuable. At the same time, I’ve been trying to use the Mac OS X Automator more often, and I’ve found that it offers an arguably easier, more user-friendly interface, especially for folks who may be a bit more hesitant about shell scripting.
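Before turning to Automator, here is what the shell route looks like in practice: a quick sketch using pdftk, with placeholder file names, covering just the two operations mentioned above.

# Concatenate several PDFs, in the order listed, into a single file
pdftk scan01.pdf scan02.pdf scan03.pdf cat output combined.pdf

# Split a multipage PDF back into single-page files
pdftk combined.pdf burst output page_%03d.pdf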

What follows is an Automator workflow that takes an input folder of JPEGs (or PDFs) and outputs a single combined PDF with the same name as the containing folder. It can be saved as a service, so you can simply right-click any folder and run the operation within the Mac Finder. I’ve used this workflow to combine thousands of research documents into searchable digests.

Step 1: Open Automator, create a new workflow and select the “Service” template. At the top right, set it to receive selected folders in the Finder.

Step 2: Insert the “Set Value of Variable” action from the library of actions on the left. Call the variable “Input.” Below this, add a “Run AppleScript” action and paste in the following commands:

on run {input}
    -- find the folder that contains the item passed in from the Finder
    tell application "Finder"
        set FilePath to (container of (first item of input)) as alias
    end tell
    return FilePath
end run

Add another “Set Value of Variable” action below this and call it “Path.” This will establish the absolute path to the containing folder of your target folder for use later in the script. If this is all getting too confusing, just hang in there. It will probably make more sense by the end.

Step 3: Add a “Get Value of Variable” action and set it to “Input.” Click on “Options” on the bottom of the action and select “Ignore this action’s input.” This part is crucial, as you are starting a new stage of the process.

Step 4: Add the “Run Shell Script” action. Set the shell to Bash and pass input “as arguments.” Then paste the following code:

echo ${1##*/}

I admit that I am cheating a little bit here. This Bash command will retrieve the name of the target folder so that your output file is named properly. There is probably an easier way to do this using AppleScript, but to be honest I’m just not that well versed in AppleScript. Add another “Set Value of Variable” action below the shell script and call it “FolderName” or whatever else you want to call the variable – it really doesn’t matter.
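If you want to see what that parameter expansion does before wiring it into the workflow, try it in Terminal with a throwaway path (the folder below is just an example):

FOLDER="/Users/you/Research/Box 12"
echo "${FOLDER##*/}"     # prints: Box 12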

Step 5: Add another “Get Value of Variable” action and set it to “Input.” Click on “Options” on the bottom of the action and select “Ignore this action’s input.” Once again, this step is crucial, as you are starting a new stage of the process.

Step 6: Add the action to “Get Folder Contents,” followed by the action to “Sort Finder Items.” Set the latter to sort by name in ascending order. This will ensure that the pages of your output PDF are in the correct order, the same order in which the files appeared in the source folder.

Step 7: Add the “New PDF from Images” action. This is where the JPEGs are actually converted and combined. Save the output to the “Path” variable. If you don’t see this option on the list, go to the top menu and click on View –> Variables. You should now see a list of variables at the bottom of the screen. At this point, you can simply drag and drop the “Path” variable into the output box. Set the output file name to something arbitrary like “combined.” If you want to combine individual PDF files instead of images, skip this step and scroll down to the end of this list for alternative instructions.

Step 8: Add the “Rename Finder Items” action and select “Replace Text.” Set it to find “combined” in the basename and replace it with the “FolderName” variable. Once again, you can drag and drop the appropriate variable from the list at the bottom of the screen. Save the workflow as something obvious like “Combine Images into PDF,” and you’re all set. When you right-click on a folder of JPEGs (or other images) in the Finder, you should be able to select your service. Try it out on some test folders with a small number of images to make sure all is working properly. The workflow should deposit your properly-named output PDF in the same directory as the source folder.
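As an aside, the image half of this service can also be approximated in a single shell line with ImageMagick, assuming it is installed and its security policy allows PDF output; the path below is only a placeholder:

cd "/path/to/Box 12" && convert *.jpg "../Box 12.pdf"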

To combine PDFs rather than image files, follow steps 1-6 above. After retrieving and sorting the folder contents, add the “Combine PDF Pages” action and set it to combine documents by appending pages. Next add an action to “Rename Finder Items” and select “Name Single Item” from the pull-down menu. Set it to name the “Basename only” and drag and drop the “FolderName” variable into the text box. Lastly, add the “Move Finder Items” action and set the location to the “Path” variable. Save the service with a name like “Combine PDFs” and you’re done.
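For anyone who would rather skip Automator entirely, here is a rough shell equivalent of the PDF version of the service, again leaning on pdftk. It is a sketch under the assumption that pdftk is installed and that the folder contains only the PDFs you want merged:

#!/bin/bash
# Usage: ./combine_pdfs.sh /path/to/folder
# Merges every PDF in the folder, in name order, into FolderName.pdf alongside the folder.
dir="${1%/}"             # target folder, trailing slash removed
name="${dir##*/}"        # the folder name doubles as the output file name
cd "$dir" || exit 1
pdftk *.pdf cat output "../${name}.pdf"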

This procedure can be modified relatively easily to process individually selected files rather than entire folders. Acting on whole folders worked best for me, though, so that’s what I did. Needless to say, the containing folder has to be named appropriately for this to work. I find that I’m much better at properly naming my research folders than I am at naming all of the individual files within them. So, again, this process worked best for me. A lot can go wrong with this workflow. Automator can be fickle, and scripting protocols are always being updated and revised, so I disavow any liability for your personal filesystem. I also welcome any comments or suggestions to improve or modify this process.

The $14 Million Question

Yesterday a copy of the Bay Psalm Book, the first book composed and printed in British North America, sold at auction for a record-breaking $14.16 million. Members of Boston’s Old South Church decided to sell one of their two copies to help fund their cash-strapped congregation, and while the amount fell short of the auction house estimate of $15-30 million, it is certainly enough to buy a whole lot of snazzy sermons, baptismal fonts, and really uncomfortable pews. A number of talented and distinguished historians, including Jill Lepore and David Spadafora, have weighed in on the broader context and significance of this standard devotional text, printed in the fledgling Massachusetts Bay Colony in 1640. Amid all of the excellent scholarly analysis and public humanities work, however, no one seems to be asking the big question: why is someone willing to pay millions of dollars for a book that anyone with an internet connection can get for free? In an age of increasingly universal digitization, when nearly every major print publication prior to 1923 is available online, why do some public domain printed books sell for princely sums?

In 1947, when the last Bay Psalm Book sold at auction for $151,000, a researcher needed to physically travel to a major library in order to view an original copy. In the Northeast, there were plenty of options: Yale, Harvard, Brown, the Boston Public Library, the New York Public Library, the American Antiquarian Society. South of New York City, there was nothing. West of the Appalachians, the only choice was the private Huntington Library in California – and their copy was missing seven pages, including the title page. The only copy available to researchers outside of the United States was at the Bodleian Library at the University of Oxford. Bibliophiles published facsimile editions as early as 1862, but their production and circulation were limited. Depending on how far one had to travel, and factoring in layover times, scheduling, family and work obligations, and local arrangements, the one-time cost of consulting this small piece of religious history could be enormous. Gripes about the digital divide notwithstanding, the analog divide was and is much worse.

In 2013, copies of the Bay Psalm Book are everywhere – the Library of Congress, the World Digital Library, even the Old South Church. In fact, almost every single book, pamphlet, and broadside published in colonial America is available for free online or at a participating library through Readex’s Early American Imprints series. Yale’s copy of the Bay Psalm Book, which, coincidentally, was the one purchased at the aforementioned auction in 1947, is available in full here. That book sold for the equivalent of about $1.5 million in present-day terms. No copies of this august tome have been discovered or destroyed since 1947. So why is the same book worth over $14 million today? What accounts for this tenfold increase in value?

I can think of several reasons why someone would pay so much for a book that is available to everyone for free. If there are significant deviations or marginalia between and among different copies or editions, each copy is more or less unique and thus uniquely valuable. Yet the differences among the various Bay Psalm Books are fairly well documented by this point and are not that extreme. Another reason might be personal profit or prestige. To his credit, David Rubenstein, the billionaire investor who purchased the book at yesterday’s auction, plans to loan it out to libraries around the country and to place it on deposit with a public institution. Although he may derive a good deal of personal satisfaction from this arrangement, I do not think that private gain is his primary goal. That leaves one more motive – the simple pleasure of the physical artifact.

The Early Dawn – rarer than the Bay Psalm Book and just as significant, but considerably less expensive. Courtesy of Special Collections, Yale Divinity School Library.

Perhaps one reason why the value of the Bay Psalm Book has increased ten times over the past 60 years is that paper, photographic, and digital reproductions have increased exponentially over the same period. In an era of digital alienation, there is greater romance in the physical object. To touch, to feel, to smell, even to be in the near presence of a famous text creates a kind of living connection with history. Such documents become, as Jill Lepore writes of the United States Constitution, “a talisman held up against the uncertainties and abstractions of a meaningless, changeable, paperless age.”

This is nothing new, of course. Since the days when early Christians passed around the head of Saint Paul or the foreskin of Jesus, and probably long before that, people have always been fascinated by sacred relics. Presumably, this is why so many tourists flock to see the original Declaration of Independence or the Wright Flyer in Washington D.C. One can read almost everything there is to know about the Declaration or the Wright brothers on an iPad while waiting in line at Stop & Shop, but there is something ineffably special about being in the presence of the real thing.

Even so, what justifies such an outrageous price tag? There are almost a dozen copies of the Bay Psalm Book, all available, to some extent, to the public. And there are plenty of rare and valuable historical documents that seldom see the light of day. A few years ago, I found an 1864 edition of the Early Dawn for sale online for less than $200. Published by American abolitionists at the Mendi Mission in West Africa starting in 1861, it is a periodical that ties together the struggles against slavery and racism across two continents. It is invaluable to our understanding of global politics, history, religion, and the state of our world today. In this sense, it is just as significant as the Bay Psalm Book. It is also extremely rare. As far as I know, there is only one other extant issue from the same time period. Fortunately, I was able to convince my colleagues at the Yale Divinity School to purchase and properly preserve this one-of-a-kind artifact so that it would be available for future researchers (click the image above for a full scan of the paper). I am sure that every historian who has worked on a major project has a story similar to this. If not an online purchase, then it is a special document found in an archive, or an especially moving oral history.

There are countless unique and historically significant documents and manuscripts moldering in libraries and repositories around the world. Some of them are true gems, just waiting to be discovered. Most of them remain unavailable and unknown. And yet our society sees nothing wrong with a private citizen spending a small fortune to acquire a copy of the Bay Psalm Book. There is no question that the venerable Old South Church deserves our support, and I have no doubt that its congregants do important work in their community and abroad. But how many lost treasures could have been brought to the world for the first time for the cost of this single public domain text? How much digitization, transcription, or innovation could $14.16 million buy?

Cross-posted at HASTAC

The Assassination of Zachary Taylor

Today marks the fiftieth anniversary of the assassination of President John F. Kennedy, and the internet and airwaves are awash in an orgy of commentaries and memorials. What can a digital humanist add to this conversation? Well, for starters, one could ask what the assassination of President Kennedy would look like in the age of social networks, smart phones, and instantaneous communication (bigbopper69: JFK shot in dallas OMG!!! 2 soon 2 no who #grassyknoll). NPR’s Today in 1963 project, which is tweeting out the events of the assassination as they occurred, day-by-day, hour-by-hour, may actually provide a good sense of what it was like to be there in real time. For those of us born decades after the fact, the deluge of digitized photos, videos, documents, and other artifacts enables a kind of full historical immersion that is not quite the same as time travel but close enough to be educationally useful.

One of the more interesting statistics to come out of this year’s commemoration is that “a clear majority of Americans (61%) still believe others besides Lee Harvey Oswald were involved” in a conspiracy to kill President Kennedy. Indeed, historical data show that a majority of Americans have suspected a conspiracy since 1963, at times reaching as high as 81 percent of respondents. This raises all sorts of interesting questions for our current moment, when rumor and misinformation spread as easily as the truth and technophiles celebrate the wisdom of the crowd while solemnly proclaiming the death of the expert. Especially after the recent revelations of unprecedented government spying, including secret courts and secret backdoors built into consumer software, Americans seem to have little reason to trust authority. So what is the role of popular knowledge in the age of digital history?

It would be easy to dismiss the various JFK assassination theories as just another example of what Richard Hofstadter called “The Paranoid Style in American Politics.” Yet to do so would ignore the important function of rumor, gossip, conspiracy theories, and other forms of popular wisdom as material forces in the shaping of our world. 1 Getting at the truth behind major events is, of course, the prime directive of all good history, digital or otherwise. A certain degree of analytical distance, strict rules of evidence, and overt argumentation are what separate professional historiography from simple nostalgia. But what counts as truth can sometimes be just as revealing as the truth itself. The alleged assassination of President Zachary Taylor is a case in point.

When Taylor, the twelfth president, died suddenly of an unidentified gastrointestinal illness just sixteen months into his first term in office, rumors spread that he had been eliminated by political rivals. Taylor’s death, in July 1850, came at a time of heightened tension between supporters and opponents of slavery. Although a slaveholder himself and the hero of an expansionist war against Mexico, Taylor took a moderate position on the slavery question and appeared to oppose its extension into the western territories. His actions may have troubled some of the more ardent southern politicians, including Senator – and future Confederate President – Jefferson Davis. Not long after his predecessor’s tragic demise, newly-minted President Millard Fillmore signed the Compromise of 1850, which had stalled under Taylor’s administration. The legislation included territorial divisions and an aggressive fugitive slave law that helped to set the stage for the looming Civil War.

I will not rehash the specific circumstances of Taylor’s illness, which is conventionally ascribed to a tainted batch of cherries and milk. Suffice it to say that the rapid and inexplicable nature of his death, which fit the profile for acute arsenic poisoning, coupled with the laughably inept state of professional medicine, left plenty of room for speculation. 2 Members of the rising antislavery coalition, soon to be called the Republican Party, were suspicious that the President had met with foul play. Nor were their suspicions limited to Taylor. Over time, the list of alleged assassination victims grew to include Andrew Jackson, William Henry Harrison, and James Buchanan, among others.

Republicans worried that Abraham Lincoln would meet a similar fate after the contentious presidential election of 1860. Even before the election, letters poured in warning the candidate about attempts to poison his food and begging him to keep a close eye on his personal staff. I counted at least fourteen warning notes in a very cursory search of the Lincoln Papers at the Library of Congress. Many of them mention President Taylor by name. “Taylor was a vigorous man, of good habits and accustomed to active life and trying duties,” wrote a supporter from Ohio, “and that he should fall a solitary victim to cholera, in a time of health, after eating a little ice cream is quite unsatisfactory.” After carefully studying the circumstances of Taylor’s death, another concluded that “the Borgias were about.” Yet another consulted a clairvoyant who warned of an active conspiracy to poison the President. In a speech responding to Lincoln’s assassination five years later, railroad magnate and women’s rights advocate George Francis Train mentioned in passing that slaveholders had “poisoned Zachary Taylor,” as if it were a matter of fact. 3

John Armor Bingham, one of the three lawyers tasked with prosecuting the Lincoln assassination conspiracy and the primary author of the fourteenth amendment to the Constitution, reportedly spent some time investigating Taylor’s death. His research, presumably conducted during or shortly after the Lincoln trial in 1865, led him to believe that Taylor had been poisoned and that Jefferson Davis had helped to precipitate the plot. 4 It is a striking claim, if true. Davis was Taylor’s son-in-law by an earlier marriage, and the two were known to be friends. Indeed Taylor uttered his final words to Davis, who stood vigil at his deathbed. Bingham also suspected that Davis was involved in Lincoln’s death, which is unlikely, though not impossible, since there is evidence to suggest that Lincoln’s assassin had contact with Confederate spies in the period leading up to the attack. 5 Whatever the case, Davis was decidedly ambivalent about the effect of the President’s removal on the flagging war effort in the South.

Although historians have shown sporadic interest in Bingham – he was an early antislavery politician and U.S. Ambassador to Japan in addition to his important legal and constitutional roles – I could find no substantial information about his investigation into a conspiracy to murder Zachary Taylor. 6 The finding aids for Bingham’s manuscripts at the Ohio Historical Society and the Library of Congress did not reveal anything related to Taylor. A superficial perusal of similar material at the Pierpont Morgan Library in New York, which holds some of Bingham’s records pertaining to the Lincoln Assassination, also failed to turn up anything significant. Still, my search was limited to document titles and finding aids and did not dig very deep into the actual content of his papers. Perhaps some enterprising digital historian could investigate further?

Uncertainty about Taylor’s death continued to smolder until the early 1990s, when an assiduous biographer managed to secure permission to exhume his body and run scientific tests on the remains. Early results showed no evidence of arsenic poisoning, though later research concluded that those results were unreliable. According to presidential assassination experts Nancy Marion and Willard Oliver, there is no definitive proof either way, and thus the ultimate cause of Taylor’s death remains a mystery. 7 While I think the evidence for natural causes is persuasive, the assorted circumstantial and physical evidence for poisoning is certainly intriguing. More intriguing still is the fact that so many contemporaries, including major political figures, were convinced that Taylor had been intentionally targeted.

The confusion surrounding Taylor’s death speaks to the awesome influence of the “Slave Power Conspiracy” that gripped the nation for much of the nineteenth century. Aspects of this conspiracy theory could be extreme, but as the historian Leonard Richards has shown in great detail, the Slave Power was a quantitative reality that could be measured in votes, laws, institutions, and individuals. 8 Although historians can debate the extent to which it was a self-conscious or internally unified collusion, thanks to the three-fifths clause, the spread of the cotton gin, and other peculiarities of antebellum development, there really was a Slave Power in early American politics. Bingham may have been overzealous when it came to the sinister machinations of Jefferson Davis, but there is no question that Davis and his ilk shared a broadly similar agenda. Popular knowledge about the death of Zachary Taylor, whatever its veracity, reflected a real concern about the grip of a small group of wealthy aristocrats over the social, economic, and political life of the country, just as theories about the death of JFK reflect a real concern about the exponential growth of the U.S. national security state.

A few days ago, Americans celebrated the 150th anniversary of the Gettysburg Address, another epochal moment in their national history. Unlike the sadness and uncertainty surrounding the JFK assassination, this was a moment of optimism and unity, typified by the filmmaker Ken Burns, who solicited readings of the Address from everyone from Uma Thurman to Bill O’Reilly, including all five living U.S. Presidents. Lost in patriotic reverie, it is easy to lose sight of the bitter, divisive, and bloody conflict that formed the broader context for that document. It is no accident, perhaps, that the recently unmasked espionage programs developed by the United States and Great Britain were named after civil war battles – Manassas and Bullrun for the NSA, Edgehill for GCHQ. The choice of names appears to be intentional. Both battles were pivotal moments, the first major engagements in long and destructive wars that would result in the birth of modern nations. Likewise, these surveillance systems appear to be the first step in a prolonged global war for digital intelligence. Is this evidence of a conspiracy? Or is it yet more evidence of the extent to which conspiratorial thinking has infiltrated modern political culture – just another example of the new paranoid style?

Notes:

  1. Clare Birchall, Knowledge Goes Pop: From Conspiracy Theory to Gossip (New York: Berg, 2006); Jesse Walker, The United States of Paranoia: A Conspiracy Theory (New York: HarperCollins, 2013).
  2. K. Jack Bauer, Zachary Taylor: Soldier, Planter, Statesman of the Old Southwest (Baton Rouge: Louisiana State University Press, 1985), 314-328; Michael Parenti, History as Mystery (San Francisco: City Lights, 1999), 209-239; Willard Oliver and Nancy Marion, Killing the President: Assassinations, Attempts, and Rumored Attempts on U.S. Commanders-in-Chief (Santa Barbara: Praeger, 2010), 181-189.
  3. “Geo. Francis Train,” Philadelphia Inquirer, May 13, 1865.
  4. “Assassination of Presidents,” New York Times, Aug. 29, 1881.
  5. William A. Tidwell, April ’65: Confederate Covert Action in the American Civil War (Kent, OH: Kent State University Press, 1995).
  6. C. Russell Riggs, “The Ante-Bellum Career of John A. Bingham: A Case Study in the Coming of the Civil War” (PhD Thesis, New York University, 1958); Erving E. Beauregard, Bingham of the Hills: Politician and Diplomat Extraordinary (New York: P. Lang, 1989); Gerard N. Magliocca, American Founding Son: John Bingham and the Invention of the Fourteenth Amendment (New York: New York University Press, 2013).
  7. Oliver and Marion, Killing the President, 181-189.
  8. David Brion Davis, The Slave Power Conspiracy and the Paranoid Style (Baton Rouge: Louisiana State University Press, 1969); Leonard L. Richards, The Slave Power: The Free North and Southern Domination, 1780-1860 (Baton Rouge: Louisiana State University Press, 2000).