Digital Histories’ Greatest Hits

Greatest hits albums are a prickly genre. The format is simple enough: a compilation of an artist’s most popular singles, sometimes with a few rare or unreleased tracks thrown in to sweeten the deal. Many of these albums are just a cheap cash grab for big studios, an opportunity to sell the same song twice. The invention of the mixtape provided the first challenge to this model. In the age of iTunes and Google Music, why not just build your own playlist? There is also something profoundly offensive about reducing an entire career to a handful of catchy tracks. How would one create a greatest hits album for Beethoven? Or Neal Morse, for that matter? Some of his best songs are half an hour long. Would Danzig’s greatest hits be his entire discography?

On the other hand, if it is done well, a “best of” album can offer an entry point into a broader musical world. Neil Young’s Decade and Queen’s Greatest Hits are often ranked among the best albums of the twentieth century. If artists are given the opportunity to remaster or remix old material, the result can be fresh and enlightening. Although it is rare, academic historians sometimes publish career retrospectives of their work. No one will compare them to the genius of Freddie Mercury. But they have their moments. Jill Lepore’s The Story of America and Marcus Rediker’s Outlaws of the Atlantic are good recent examples. Bloggers sometimes include a “best of” category to help introduce readers to their oeuvre. And, as I wind down my work on this site and begin the transition to a new phase in my career, I thought I might do the same for Digital Histories.

Over the past five years, I have published several dozen articles on this site, from historical and pedagogical essays to website reviews and technical guides. Printed together as a single document, they add up to about 130 pages. I was a little late adding Google Analytics into the mix, but I still have a pretty decent picture of which articles readers find most useful. So I compiled a “Top Ten” list as a separate page. I also added a link to the page at the top level of the site, where new visitors can easily find it. These are not necessarily the most important articles on the site, or even the most representative, but simply the ones that have attracted the highest number of unique page views according to Google.

Surprisingly, articles cross-posted on other blogs did not top the list. Some articles with inbound links from other websites did not even make the top ten. Twitter, Facebook, and other social media only account for about 10% of inbound traffic. Simple keyword searches (called “organic searches” in SEO lingo) make up most of the rest. So articles with popular search terms, such as “derrida” or “course management” or “slavery footprint,” tend to rank higher on the list. The posts on WordPress as a course management system and Elihu Yale as a slave trader were breakout hits. But others I consider significant did not even rate. My essay on teaching with runaway advertisements, which was included on HASTAC’s Pedagogy Project and generated some interesting conversations, did not make the top ten. Perhaps that is because most folks are reading it on HASTAC. As a rule, newer posts tend to have more page views than older ones, which may indicate that I have improved my blogging skills over the past several years. Or maybe it means that nobody clicks on older stuff. Readers of online content tend to follow the shiny penny and then quickly move on to something else.

What are some of your favorite “greatest hits” compilations? Is this a worthwhile subject for digital scholarship? How do we curate the best work? Should we even care which of our projects attract the most hits or have the most likes? Isn’t trying weird or unpopular things an important part of our mission? We simply do not know what will be important or impactful five or ten or twenty years from now. If we focus only on what is trending this minute, do we lose sight of what might be useful in the future?

The Secret History of the Severed Hands

Photo Credit: Andrew Norman Wilson

Walter Isaacson is enamored of great men. As an author, he has published popular biographies of Benjamin Franklin, Albert Einstein, Henry Kissinger, and Steve Jobs (the last of which sold 379,000 copies in its first week). As President of the elite Aspen Institute, he arranges for great and powerful men to network with other great and powerful men. As the managing editor of Time magazine and the CEO of CNN, he has reached for the heights of great man-dom himself. One of his books is simply called The Wise Men. He even edited a collection on what he calls “the Elusive Quality of Greatness.” This makes his latest study all the more surprising. Called The Innovators, it is anchored by a demure Victorian woman who was largely ignored during her short life and, for about 150 years thereafter, considered not really that great. A kind of group portrait of the digital revolution, the book begins and ends with Ada Lovelace.

The daughter of Lord Byron, socially privileged, chronically ill, and problematically married, Lovelace was the quintessential Victorian aristocrat. She was also a mathematical prodigy who envisioned the first computer program in a breathtaking work of theorizing tucked away at the bottom of an English translation of an Italian article about the work of someone else. When I screened a documentary about the Countess of Lovelace in my Digital History course this year, not a single student knew her name. By the end of the class period, all of them agreed that she was one of the most significant figures in the history of computing. (One of my students cited her as inspiration to write her senior thesis on the digital gender divide and recent efforts to bridge it.)

Isaacson is hardly the first to notice Ada Lovelace. That she occupies such a prominent place in his narrative is due to years of work by generations of scholars who have written articles and news stories, produced films and biographies, and founded an international Ada Lovelace Day. Isaacson does not shy away from her influence, and he connects her story to that of Grace Hopper, the Yale-educated mathematician who became one of the first and most important software developers. Like Lovelace, from whom she took inspiration, Hopper was overshadowed by her male collaborators. But she is slowly attracting more attention and now has her own annual conference. Together, these two privileged white women stand athwart the group of privileged white males who occupy the majority of Isaacson’s book.

The attention paid to Lovelace and Hopper is laudable. Still, they seem like simply another addition to the laundry list of great men. As Roy Rosenzweig pointed out long ago, the history of the digital age is just as much about the military-industrial complex and the Cold War and economic globalization as it is about eccentric geniuses toiling in obscurity. Focusing on the great men (or great women or great queer folk) can render invisible all of the not-so-great labor that birthed and midwifed the digital revolution. Much of this work was done by women. Women literally were the first computers. All of which raises the question: what would a feminist history of computing look like?

To me, it looks like a severed hand. All of us who use Google Books on a regular basis have seen them at some point, floating, disembodied, anonymous, usually feminine or non-white. People have been blogging about the hands for years. A Google worker famously lost his job after trying to film the bodies attached to them. The hands are printed in art books and compiled on Tumblr. Many of them evoke themes of race and class. Or, as the New Yorker put it, “a brown hand resting on a page of a beautiful old book.” The perceived disparity between the brown and the beautiful speaks volumes. Sometimes the hands communicate subtle messages. Consider the bejeweled hand pointing to a chapter in a biography of Napoleon. Or get lost in the psychedelic mystery that is A Serious Call to the Christian World, authored by “Jews.” One of my personal favorites is a small appendage with long nails and neon purple finger-condoms grasping the title page of Adam Smith’s Wealth of Nations. Who says there is no poetry in the everyday? In an instant, “the invisible hand” of the market is laid bare. These glitches are the material traces of the workers who actually power the digital revolution. The severed hand is the embodiment of a history.

Rooting through some early microfilm, I came across more hands. Black-and-white, grainy, but unmistakably female, complete with wedding rings and painted nails. Looking at another microfilm reel from the early 1940s, I saw yet more hands, also ringed, also female, and I realized that this had been going on for a long time. Like the women who programmed the first computers, their work was both invisible and glaring, ordinary and extraordinary. The same technology that was attempting to erase this manual labor was making it ever more visible. Here was an army of Ada Lovelaces, working in secret, processing material, cataloging, inventing, filming, scanning, and providing the essential groundwork and infrastructure for all that showy greatness. Although perhaps not of the Isaacsonian ilk, they quietly demand our attention.

Cross-posted at HASTAC

Elihu Yale was a Slave Trader

Next week, the Gilder Lehrman Center for the Study of Slavery, Resistance, and Abolition and the Yale Center for British Art are co-hosting a major international conference on slavery and British culture in the eighteenth century. The art exhibit associated with the conference is remarkable for many reasons, not least because it features a portrait of Elihu Yale being waited upon by a collared slave (euphemized as a “page” in the original listing). The painting is related to one held by the University Art Gallery, showing the same scene from a different perspective. And it is similar to another portrait of Yale with yet another collared slave (this time euphemized as a “servant”). This latter portrait, even more ominous and imperial than the first, is not a part of the exhibit. And that is a shame, because these paintings, and the larger conference of which they are a part, offer an opportunity to revisit the controversial and entangled history of slavery and universities.

Historians have long pointed out that Yale (the University) is deeply implicated in the institution of slavery. Many of its prominent buildings are named after slaveholders or slavery apologists. It housed so many southern students that it briefly seceded from the Union at the start of the Civil War.[1] Craig Wilder’s wonderful book Ebony & Ivy, published last year, shows that Yale is not alone in this regard. All of early America’s leading universities, both north and south, promoted and profited from slavery, racism, and colonialism.[2] At the same time, college campuses were battlegrounds where antislavery students and faculty engaged in dramatic confrontations with their opponents and developed new political movements.[3] Oddly enough, none of the scholarship on these issues mentions that Elihu Yale, the namesake of this august and venerable institution, was himself an active and successful slave trader.

As an official for the East India Company in Madras (present-day Chennai), Yale presided over an important node of the Indian Ocean slave trade. Much larger in duration and scope than its Atlantic counterpart, the Indian Ocean trade linked southeast Asia with the Middle East, the Indonesian archipelago, and the African littoral. On the subcontinent, it connected with and drew upon traditions of slavery and servitude that had flourished for generations.[4] In the 1680s, when Yale served on the governing council at Fort St. George on the Madras coast, a devastating famine led to an uptick in the local slave trade. As more and more bodies became available on the open market, Yale and other company officials took advantage of the labor surplus, buying hundreds of slaves and shipping them to the English colony on Saint Helena. Yale participated in a meeting that ordered a minimum of ten slaves sent on every outbound European ship.[5] In just one month in 1687, Fort St. George exported at least 665 individuals.[6] As governor and president of the Madras settlement, Yale enforced the ten-slaves-per-vessel rule. On two separate occasions, he sentenced “black Criminalls” accused of burglary to suffer whipping, branding, and foreign enslavement.[7] Although he probably did not own any of these people – the majority were held as the property of the East India Company – he certainly profited both directly and indirectly from their sale.

Some sources (including Wikipedia) portray Elihu Yale as a heroic abolitionist, almost single-handedly ending the slave trade in Madras.[8] This is incredibly misleading. During his tenure as governor, Yale made an effort to curb the stealing of children and others for the purpose of export. But a close reading of company documents reveals that it was anything but an act of humanitarian altruism. It was, in fact, the local Mughal government, which held more power than the tenuous English merchants, that insisted on abolition. Yale’s decree of May 1688 curbing the transport of slaves from Madras argued that the trade had become more trouble than it was worth. The surfeit of slaves from the previous year’s famine had dried up, and the indigenous government had “brought great complaints & troubles…for the loss of their Children & Servants Sperited and Stoln from them.”[9] With no profit left for the company and a hostile Mughal overlord demanding abolition, Yale was happy to comply.

Only one year later, in October 1689, Yale had no problem issuing orders for a company ship to travel to Madagascar, buy slaves, and transport them to the English colony on Sumatra. When they arrived by the hundreds, these unfortunate individuals were put to work as masons, carpenters, smiths, cooks, maids, gardeners, and porters. A select few even served as soldiers. In addition to free labor, they provided a strategic buffer against European rivals and further consolidated the company’s political and economic power.[10] African slaves in India and Indonesia, Indian slaves on St. Helena, rival empires jostling for control – the Indian Ocean trade was a complicated and convoluted melange. And Elihu Yale was right in the thick of it, directing it, turning it to his own advantage, and growing fat and rich from its spoils. This wealth, in the form of diamonds, textiles, and other luxury goods, enticed the founders of Yale College to pursue the famous merchant and to name their school in his honor.[11]

Apologists might counter that Yale was a man of his time. Slavery was impossible to avoid, nobody opposed it, and most rich and successful people had a hand in it. None of that is true. In April 1688, less than a year after Yale became governor of Madras, a group of Quakers in Germantown, Pennsylvania, issued a statement condemning slavery in the colony: “There is a saying that we shall doe to all men licke as we will be done ourselves; macking no difference of what generation, descent or Colour they are. and those who steal or robb men, and those who buy or purchase them, are they not all alicke?” Quakers shed their ties to slavery during the eighteenth century while building a reputation as profitable and successful merchants. And they were hardly the only ones to protest the institution. In 1712, a major slave rebellion erupted in New York City, in which at least nine Europeans and twenty-seven Africans lost their lives. Several years later, when Yale College took its present name, opposition to slavery was endemic across the British Empire.[12] This was the broader world in which Elihu Yale worked, schemed, and built his fortune.

The evidence establishing Yale’s involvement in the slave trade is clear and compelling. Thanks to the Internet Archive, HathiTrust, and Duke University, almost all of the official records of Fort St. George are available online, and even more documents await future researchers. Those looking for further information can follow my footnotes. Hopefully other scholars will build on this record to paint a more complete picture of the stoic British gentleman and his dark, diminutive servants, forever bound together in those disturbing oil portraits.



  1. Antony Dugdale, J.J. Fueser, and J. Celso de Castro Alves, Yale, Slavery and Abolition (New Haven: The Amistad Committee, 2001); Frank Leslie’s Illustrated Newspaper, Feb. 2, 1861.
  2. Craig Steven Wilder, Ebony and Ivy: Race, Slavery, and the Troubled History of America’s Universities (New York: Bloomsbury Press, 2013).
  3. Wilder touches on this subject briefly in his final chapter, and I have been working on an article that will (hopefully) expand the narrative.
  4. Gwyn Campbell (ed.), The Structure of Slavery in Indian Ocean Africa and Asia (Portland, OR: Frank Cass, 2004); Indrani Chatterjee and Richard M. Eaton (eds.), Slavery and South Asian History (Bloomington: Indiana University Press, 2006); Richard B. Allen, European Slave Trading in the Indian Ocean, 1500–1850 (Athens, OH: Ohio University Press, 2015).
  5. Records of Fort St. George: Diary and Consultation Book of 1686 (Madras: Superintendent Government Press, 1913), 48; Records of Fort St. George: Diary and Consultation Book of 1687 (Madras: Superintendent Government Press, 1916), 8.
  6. Henry Davison Love, Vestiges of Old Madras, 1640-1800: Traced from the East India Company’s Records Preserved at Fort St. George and the India Office, and from Other Sources, vol. 1 (London: John Murray, 1913), 545.
  7. Records of Fort St. George: Diary and Consultation Book of 1688 (Madras: Superintendent Government Press, 1916), 30, 137; Records of Fort St. George: Diary and Consultation Book of 1689 (Madras: Superintendent Government Press, 1916), 99.
  8. See, for example, Hiram Bingham, Elihu Yale: The American Nabob of Queen Square (New York: Dodd, Mead & Company, 1939), 167.
  9. Diary and Consultation Book of 1688, 19, 78-79.
  10. Records of Fort St. George: Letters from Fort St. George for 1689 (Madras: Superintendent Government Press, 1916), 58-59; Records of Fort St. George: Letters from Fort St. George for 1693-94 (Madras: Superintendent Government Press, 1921), 12. On imperial rivalry, especially as it developed over the next century, see Andrea Major, Slavery, Abolitionism and Empire in India, 1772-1843 (Liverpool: Liverpool University Press, 2012), 49-84.
  11. Gauri Viswanathan, “The Naming of Yale College: British Imperialism and American Higher Education,” in Cultures of United States Imperialism, ed. Amy Kaplan and Donald E. Pease (Durham: Duke University Press, 1993), 85-108.
  12. Maurice Jackson, Let This Voice Be Heard: Anthony Benezet, Father of Atlantic Abolitionism (Philadelphia: University of Pennsylvania Press, 2009); Kenneth Scott, “The Slave Insurrection in New York in 1712,” New-York Historical Society Quarterly, 45 (Jan. 1961), 43-74; Peter Linebaugh and Marcus Rediker, The Many-Headed Hydra: Sailors, Slaves, Commoners, and the Hidden History of the Revolutionary Atlantic (Boston: Beacon Press, 2000).

Digital Humanities as a Universal Language

One of the nice things about the meta-discipline of the digital humanities is that it’s also an international movement. As Carol pointed out in her post on the global digital divide, this is not an easy feat to accomplish. The asymmetrical shape of economic development over the past several centuries has influenced the technological backbone that connects different parts of the world. As a result, digital humanities work tends to mirror the starkly divided, core-periphery dynamic of contemporary globalization. Marginalized regions remain bit players, while the wealthiest countries retain their gravitational pull as the center of life for the academic elite. At the same time, though, I have noticed a sharp increase in the global consciousness and outreach among the digerati. Recent and forthcoming conferences in Australia, England, Canada, Germany, and Switzerland offer proof that the digital humanities need not be conducted along narrow nationalist lines. And the historic HASTAC conference held this year in Lima, Peru, proves that this work does not have to be exclusively Anglo or Eurocentric, either. Around DH in 80 Days, which maps a select number of projects globally, is one of the best introductions to this emerging field.


The translatability of the digital humanities, its broad and easy appeal across conventional boundaries of ethnicity, class, culture, and nation, is one of its most amazing features. At its most basic level, it functions as a kind of universal language, like HTML or mathematics or heavy metal music. Little kids can do it. Your grandmother can do it. Able-bodied people can do it. Disabled people can do it. Privileged people can do it. Oppressed and marginalized people can do it. Even birds and bees do it. Although there is still a long way to go, I think the digital humanities hold the potential to accomplish that magnificent thing to which the traditional humanities have always aspired, but rarely achieved – a truly comprehensive and inclusive representation of humanity.

As one small contribution to this project, I will offer a completely shameless plug for a one-day conference at Paris Diderot University (Paris 7) in October. Focused on recent digital history projects, the event will bring together practitioners and researchers from the United States and France (and maybe elsewhere) to initiate a dialog. I will present on some of my experiences connecting research and teaching, with a special focus on my experimental digital history course, and Constance Schulz, Professor Emerita at the University of South Carolina, will present on her NEH-funded scholarly editing project about two remarkable early American women. Additional details, including a map and schedule, are available here. I think it will be a wonderful opportunity to grow digital history work internationally, and if you happen to be in the area, I hope you will attend.

What is Digital History?

Chalkboard - What is Digital History

One thing professional scholars everywhere love to do is to categorize, define, and explain, to erect borders and boundaries and partitions. There is a good reason the word “discipline” is at the heart of the academic industrial complex. It is discipline in both the good, self-control, zen sense and the bad, Michel Foucault sense of the word. There has been much debate over the past several years about whether Digital Humanities, and its subset Digital History, constitutes its own discipline, or whether it is fundamentally trans-disciplinary at its core. And what academic discipline worthy of the name is not essentially trans-disciplinary, anyway? Try as we might to impose categories on living reality, to sort into neat boxes of genus, species, and phylum, reality is not that static. It is constantly evolving, always in motion, always transitioning from one thing to the next.

There are a great many extremely interesting documents and manifestos floating around the web attempting to draw boundaries around the digital humanities, to tie it down, to rein it in and discipline it (in the Foucauldian sense). Jason Heppler’s approach to this problem, which presents a different definition each time the page is refreshed, continually remixing them into infinite combinations, is one of the best I have seen. Digital History, like the Digital Humanities, is a broad camp, capable of accommodating everything from the whimsical Serendip-o-matic to the brutal historiographical battles erupting on the back end of prominent Wikipedia pages. The Promise of Digital History, a Journal of American History roundtable discussion from way back in 2008, is a fair introduction to this particular genre of the digital. A breakdown of the document using Voyant reveals, among other things, a strong emphasis on open access. Together, these two words appear a total of 97 times. Ironically, and perhaps appropriately, the exchange itself is a daunting and hopelessly difficult-to-digest wall of text.

At the heart of this definitional battle is a fundamental status anxiety. Is Digital History just regular old history plus expensive computers? Is it, as Adam Kirsch argues about digital literary studies, just “fancy reiterations of conventional wisdom”? Or does it represent something new and qualitatively different? When I posed this question to my students this year, it produced some fascinating results.

Being one of those definition-obsessed academics, I always ask my students to unpack what may seem like everyday or familiar terms. What is freedom? What is slavery? What is civil war? What is Africa? What is America? So when tasked with teaching Digital History to a group of undergraduates, I naturally asked them to define what exactly that means. Actually, I first asked them to define History proper, and then we tried to figure out what makes it so different when done digitally. Of course, we were not alone in this endeavor. It is deeply interesting to observe different classes in different parts of the country generate different responses to similar prompts. Our answers, some of which you can see if you click on the chalkboard above, ranged from Cervantes and Foucault to the practical and the public. I suspect that if I had asked my students at the end of the class, after they submitted their final project, they would have added that Digital History is also really hard work. It requires discipline.