All posts by Joseph Yannielli

About Joseph Yannielli

I study the history of slavery and abolition, with a special focus on the United States, West Africa, and the wider world during the nineteenth century. I began this site as a graduate student in the Department of History at Yale University. I have participated in discussions around the burgeoning field of Digital Humanities, and I use technology to enhance my research and teaching. I have also served as manager and lead developer for a few projects, such as the Yale Slavery and Abolition Portal and RunawayCT.

Archival Fragment of the Amistad Revolt

Sometimes the best cure for archive fever is to share it with the world.

“Pa Raymond,” Sierra Leone Mission Album, box 2, p. 122, Records of the United Brethren in Christ Foreign Missionary Society, United Methodist Archives, Drew University.
I was reminded of the mundane joys of the archive again several months ago when, thanks to a tip from a colleague, I located an extremely rare photograph of one of the survivors of the Amistad slave revolt in the United Methodist Archives in New Jersey. It is difficult to tell whether the old man, called “Pa Raymond” on the reverse of the photo, is the real deal, but circumstantial evidence suggests that he might be Kale Walu, or “Little Kale,” who was just a boy when he was abducted and enslaved in West Africa in 1839. Kale (also spelled Kali or Carly) was the author of the famous “crazy dolts” letter, addressed to John Quincy Adams on the eve of their trial in the United States Supreme Court. He assumed the name George Lewis when he returned to Africa in 1842, part of an ongoing project to reinvent former slaves as anglicized Christians. As one of the youngest among the returning group, he was something of a surrogate son for abolitionist missionary William Raymond and may have taken his surname later in life. Pa is Krio for “father,” an honorific title for village elders.

The photo was probably taken sometime in the early 20th century by the United Brethren in Christ, who had inherited an abolitionist outpost, called the “Mendi Mission,” in what is now southwestern Sierra Leone. Almost all of the photos in the collection date from after the rebellion of 1898. When Canadian missionary Alexander Banfield encountered a man claiming to be an Amistad veteran during a tour of Sierra Leone in 1917 (likely the same man in this photo), he estimated the man was about 100 years old. Although my work does not really focus on the Amistad captives (I’m interested in the larger story of American abolitionists in Africa), it is bracing to look into the eyes of this man. Sole survivor. Adopted son of the missionary, traveling barefoot through the bush. White-haired patriarch, holding something mysterious with his right hand. What have those eyes seen? Where are they looking now?

Thanks to the generous (and underpaid and understaffed) archivists in New Jersey and the embattled public domain laws of the United States, I am able to share this treasure with the world (I think) for the first time. It belongs to the world. I am just returning it.

Mal d’Archive

You know you’re a pretentious academic blogger when you start titling your posts in French, and if you can quote one of the most notoriously abstruse French philosophers at the same time, well, that’s just a bonus. Jacques Derrida is not much in style these days (if he ever was). His ideas, and especially his prose, have been the butt of many jokes over the past half-century, but his 1994 lecture series Mal d’Archive (later published and translated as “Archive Fever”) is a significant artifact of the early days of the digital revolution. Although I don’t quite agree with everything its author says, the book makes an earnest attempt to grapple with the intersection of technology and memory and offers some worthwhile insight.

An archivist works feverishly.

The idiomatic en mal de does not have a direct analogue in English, but for Derrida it means both a sickness and “to burn with a passion.” It is an aching, a compulsive drive (in the Freudian sense) to “return to the origin.” It is the sort of fever rhapsodized by Peggy Lee, the kind of unquenchable desire that can only be remedied by more cowbell. Whatever Derrida means by archive fever (and I think he leaves its precise meaning deliberately ambiguous), it is a concept that has some resonance for historians. As a profession, we tend to privilege primary sources, or archival documents, over secondary sources, or longer works that analyze and interpret an archive. Yet even the most rudimentary archival fragment contains within it a narrative, a story, an argument. Every document is aspirational; every archive is also an interpretation. There is no such thing as a primary source. There are only secondary sources. We build our histories based on other histories. The archive, Derrida reminds us, is forever expanding and revising, preserving some things and excluding others. The archive, as both subject and object of interpretation, is always open-ended; it is “never closed.”

Of course, in a few weeks, in what can only be described as a stunning disregard for French philosophy, the Georgia State Archives will literally shut its doors. Citing budget cuts, the state announced it will close its archives to the public and restrict access to special appointments (and those appointments will be “limited” due to layoffs). For now, researchers can access a number of collections through the state’s Virtual Vault, but it is not clear whether more material will be added in the future. The closure comes at the behest of Governor Nathan Deal, whose recent political career has been beset by ethics violations. The cutbacks are the latest in a string of controversial decisions by the Georgia governor, including the rejection of billions of dollars in Medicaid funds and a $30 million tax break for Delta Airlines, and will have a negative impact on government transparency. Coming on the heels of the ban on ethnic studies in Arizona, the campaign against “critical thinking” in Texas, attacks on teachers in Illinois and Wisconsin, and deep cuts in public support for higher education across the country, the news from Georgia seems a portent of dark times.

Archives are so essential to our understanding of the past, and our memory of the past is so important to our identity, that it can feel as if we have lost a little part of ourselves when one is suddenly closed, restricted, or destroyed. Historian Leslie Harris calls public archives “the hallmarks of civilization.” Although I don’t entirely agree (are groups that privilege oral tradition uncultivated barbarians?), Harris points to a fundamental truth. The archive is an integral component of a society’s self-perception. Without open access to archival collections, who could corroborate accusations that the government was conducting racist medical experiments? Who would discover the lost masterpiece of a brilliant author? Who would provide the census data to revise wartime death tolls? Who would locate the final key to unlock the gates of Hell? All of the boom and bluster about digitization and the democratization of knowledge notwithstanding, it is easy to forget that archival work is a material process. It takes place in actual physical locations and requires real workers. What does it mean for the vaunted Age of Information when states restrict or close access to public repositories?

However troubling the news from Georgia, all hope is not lost. This is not the end of days. Knowledge workers are fighting to preserve access to the archive. At the same time, efforts by historians to crowdsource the past offer a fascinating and potentially momentous expansion of archive fever. Several high-profile projects are now underway to enlist “citizen archivists” to help build, organize, and transcribe documentary collections. Programmers at the always-innovative Roy Rosenzweig Center for History and New Media have just released a “community transcription tool” that will (hopefully) streamline the process of collaborative archiving, transcribing, and tagging across platforms. The potential for public engagement and the production of new knowledge is stupendous. Because they rely on the same volunteer ethos as Wikipedia, however, it is likely that part-time hobbyists will be more interested in parsing obscure Civil War missives than the correspondence of Jeremy Bentham. A citizen archivist with a passion for Iroquois genealogy might have little interest in, let’s say, the municipal records of East St. Louis. And this is precisely where major repositories and their well-trained staff can help supervise, guide, and even lead the public. What if every historian could upload all of their primary sources to a central repository when they finished a project? What if there were a universal queue where researchers could submit manuscripts for public transcription, along the lines of the now-ubiquitous reCAPTCHA service? Perhaps administrators could implement some sort of badge or other incentive program in exchange for transcribing important material. As all manner of documents are digitized, uploaded, and transcribed in a lopsided, haphazard, and ad-hoc fashion, in vastly disparate quality, in myriad formats, in myriad locations, physical archives and their staff are needed more than ever – if only to help level the playing field.
Among the most important functions of the professional archivist is to remind us that there is much that is not yet online.

Note recording the arrival of the Amistad survivors in Freetown, Sierra Leone, Jan. 1842. Liberated African Register, Sierra Leone Public Archives, Freetown.

One of the best experiences I’ve ever had as a researcher was in the national archives of Sierra Leone. Despite a century and a half of colonialism, a decades-long civil war, and other challenges that come with occupying a bottom rung on the global development index, the collections remain open to the public and continue to grow and improve. They have even started to go digital thanks to some help from the British Library and the Harriet Tubman Resource Centre. Sitting in the Sierra Leone archives, with its maggot-bitten manuscripts, holes in the windows, and sweltering heat, the much-discussed global digital divide suddenly seems very real. Peering out of the window one day, I watched a mass of students drumming and chanting, then chased by soldiers in riot gear; crouched behind a bookshelf thick with papers, shielding myself from gunfire amid the screams of the crowd, I found it difficult to look at knowledge work the same way again. When I enter a private archive in the United States, with its marbled columns and leather chairs, its rows of computers and sophisticated security cameras, I am grateful and angry – grateful that this is offered to some, angry that it is denied to others. The archivists and their support team in Freetown are heroes. Full stop. I worry about them when I read about the conflict in Libya, which continues to spill across borders and has led indirectly to the destruction of priceless archives and religious monuments in Mali.

Compared to the situation in West Africa, the more modest efforts to preserve and teach the past across the United States seem like frivolous first world problems. On the other hand, all information is precious. Whether physical or digital, access to our shared heritage should not be held hostage to political agendas or economic ultimatums. Archives are a right, not a privilege. I like to think that Derrida, who grew up under a North African colonial regime, would appreciate this. If Sierra Leone can keep its archives open to the public, why can’t the state of Georgia?

Cross-posted at HASTAC

Ahead in the Clouds

The Chronicle published a lengthy review article last week on the science of brain mapping. The article focuses on Ken Hayworth, a researcher at Harvard who specializes in the study of neural networks (called connectomes). Hayworth believes, among other things, that we will one day be able to upload and replicate an individual human consciousness on a computer. It sounds like a great film plot. Certainly, it speaks to our ever-evolving obsession with our own mortality. Whatever the value of Hayworth’s prediction, many of us are already storing our consciousness on our computers. We take notes, download source material, write drafts, save bookmarks, edit content, post blogs and tweets and status updates. No doubt the amount of our intellectual life that unfolds in front of a screen varies greatly from person to person. But there are probably not too many modern writers like David McCullough, who spends most of his time clacking away on an antique typewriter in his backyard shed.

Although I still wade through stacks of papers and books and handwritten notes, the vast majority of my academic work lives on my computer, and that can be a scary prospect. I have heard horror stories of researchers who lose years of diligent work in the blink of an eye. I use Carbon Copy Cloner to mirror all of my data to an external hard drive next to my desk. Others might prefer Time Machine (for Macs) or Backup and Restore (for Windows). But what if I lose both my computer and my backup? Enter the wide world of cloud storage. Although it may be some time before we can back up our entire neural net on the cloud, it is now fairly easy to mirror the complicated webs of source material, notes, and drafts that live on our computers. Services like Dropbox, Google Drive, SpiderOak, and SugarSync offer between 2 and 5 GB of free space and various options for syncing local files to the cloud and across multiple computers and mobile devices. Most include the ability to share and collaborate on documents, which can be useful in classroom and research environments.

These free services work great for everyday purposes, but longer research projects require more space and organizational sophistication. The collection of over 10,000 manuscript letters at the heart of my dissertation, which I spent three years digitizing, organizing, categorizing, and annotating, consumes about 30 GB. That is to say nothing of the reams of digital photos, PDFs, and TIFFs spread across dozens of project folders. It is not uncommon these days to pop into a library or an archive and snap several gigs of photos in a few hours. Whether this kind of speed-research is a boon or a curse is subject to debate. In any event, although they impose certain limits, ADrive, MediaFire, and Box (under a special promotion) offer 50 GB of free space in the cloud. Symform offers up to 200 GB if you contribute to their peer-to-peer network, but their interface is not ideal and when I gave the program a test drive it ate up almost 90% of my bandwidth. If you are willing to pay an ongoing monthly fee, there are countless options, including JustCloud‘s unlimited backup. I decided to take advantage of the Box deal to back up my various research projects, and since the process was far from straightforward, I thought I would share my solution with the world (or add it to the universal hive mind).

Below are the steps I used to hack together a free, cloud-synced backup of my research. Although this process is designed to sync academic work, it could be modified to mirror other material or even your entire operating system (more or less). While these instructions are aimed at Mac users, the general principles should remain the same across platforms. I can make no promises regarding the security or longevity of material stored in the cloud. Although most services tout 256-bit SSL encryption, vulnerabilities are inevitable and the ephemeral nature of the online market makes it difficult to predict how long you will have access to your files. The proprietary structure of the cloud and government policing efforts are critical issues that deserve more attention. Finally, I want to reiterate that this process is for those looking to back up a fairly large amount of material. For backups under 5 GB, it is far easier to use one of the free syncing services mentioned above.

Step 1: Sign up for Box (or another service that offers more than a few GB of cloud storage). I took advantage of a limited-time promotion for Android users and scored 50 GB of free space.

Step 2: Make sure you can WebDAV into your account. From the Mac Finder, click Go –> Connect to Server (or hit command-k). Enter “https://www.box.com/dav” as the server address. When prompted, enter the e-mail address and password that you chose when you set up your Box account. Your root directory should mount on the desktop as a network drive. Not all services offer WebDAV access, so your mileage may vary.

Step 3: Install Transmit (or a similar client that allows synced uploads). The full version costs $34, which may be worth it if you decide you want to continue using this method. Create a favorite for your account and make sure it works. The protocol should be WebDAV HTTPS (port 443), the server should be www.box.com, and the remote path should be /dav. Since Box imposes a 100 MB limit for a single file, I also created a rule that excludes all files larger than 100 MB. Click Transmit –> Preferences –> Rules to establish what files to skip. Since only a few of my research documents exceeded 100 MB, I was fine depositing these with another free cloud server. I realize not everyone will be comfortable with this.

Step 4: Launch Automator and compile a script to run an upload through Transmit. Select “iCal Alarm” as your template and find the Transmit actions. Select the action named “Synchronize” and drag it to the right. You should now be able to enter your upload parameters. Select the favorite you created in Step 3 and add any rules that are necessary. Select “delete orphaned destination items” to ensure an accurate mirror of your local file structure, but make sure the Local Path and the Remote Path point to corresponding folders. Otherwise, the script will overwrite the remote folder to match the wrong local folder and create a mess. I also recommend disabling the option to “determine server time offset automatically.”

Step 5: Save your alarm. This will generate a new event in iCal, in your Automator calendar (if you don’t have a calendar for automated tasks, the system should create one for you). Double-click the event to modify the timing. Set repeat to “every day” and adjust the alarm time to something innocuous, like 4am. Click “Done” and you should be all set.

Automator will launch Transmit every day at your appointed time and run a synchronization on the folder containing your research. The first time it runs, it should replicate the entire structure and contents of your folder. On subsequent occasions, it should only update those files that have been modified since the last sync. There is a lot that can go wrong with this particular workflow, and I did not include every contingency here, so please feel free to chime in if you think I’ve left out something important.

If, like me, you are a Unix nerd at heart, you can write a shell script to replicate most of this using something like cadaver or mount_webdav, rsync, and cron. I might post some more technical instructions later, but I thought I should start out with basic point-and-click. If you have any comments or suggestions – other cloud servers, different process, different outcomes – please feel free to share them.
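For the curious, here is a rough sketch of what that shell approach might look like. Treat it as a starting point, not a finished recipe: it assumes rsync is installed, that you have already mounted your Box share over WebDAV (the /Volumes/box mount point and folder names below are placeholders of my own invention), and it reproduces the Transmit settings from the steps above – mirrored deletions and the 100 MB per-file cap.

```shell
#!/bin/sh
# sync_research.sh -- sketch of mirroring a research folder to a
# mounted WebDAV volume, honoring Box's 100 MB single-file limit.

# Mirror src to dest: -a preserves timestamps and permissions,
# --delete removes orphans on the destination (the equivalent of
# Transmit's "delete orphaned destination items"), and --max-size
# skips any file over Box's per-file cap.
sync_research() {
    rsync -a --delete --max-size=100m "$1" "$2"
}

# Example usage (paths are placeholders; adjust for your own setup):
# sync_research "$HOME/Research/" "/Volumes/box/Research/"

# To mimic the iCal alarm, run it nightly at 4am via `crontab -e`:
# 0 4 * * * /path/to/sync_research.sh
```

Note the trailing slashes in the example paths: rsync treats “src/” as “the contents of src,” which keeps the local and remote folder structures aligned instead of nesting one inside the other.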

UPDATE: Konrad Lawson over at ProfHacker has posted a succinct guide to scripting rsync on Mac OS X. It’s probably better than anything I could come up with, so if you’re looking for a more robust solution and you’re not afraid of the command line, you should check it out.

Cross-posted at HASTAC

Eternal Sunshine of the Spotless Draft

I am an inveterate Mac user. Some might say I’m a fanboy. Although I like to think that my brand loyalty is due to a cleaner, easier, more pleasing operating experience, there are other factors. Part of my attraction stems from the “Think Different” ad campaign of my youth – flattering for any impulsive iconoclast. Or maybe it’s that soothing chime. I don’t agree with everything Apple has ever done, especially now that they’ve thundered into the mainstream, but I still think that, when all is said and done, they can produce a better-quality product than the competition (now if only they could do it humanely). Apple devices are marketed as polished, elegant, intuitive. A common complaint about Microsoft, on the other hand, is that they have trouble releasing a finished product. Windows is notorious for being incomplete, buggy, awkward, in need of an endless cascade of updates and service packs. Of course, Mac OS X, Linux, Android, and every other decent piece of software does exactly the same thing. OS X has endured at least seven major revisions in the past decade, while Windows has suffered maybe three (it all depends on your definition of “major revision”). This endless turnover used to bother me. Does Firefox really need to release a new version every other day? How much useless bloat can software designers cram into MS Word before it finally explodes? Lately, however, I’ve come to accept and even embrace this radical incompleteness.

The age of static print was defined by permanence. Authors and editors had to work for a long time on multiple drafts, revisions, and proofs. The result was a clay tablet, or a scroll, or a codex book. With the onset of the printing press, it was easier to make corrections. Movable type could be reset and rearranged to create appended, expanded, and revised editions. Still, the emphasis was on stability. The paperback book I have on my desk right now looks pretty much exactly the same as it did when it was first published in 1987. And it will always look that way. A lot of effort went into its publication because it would be extremely difficult to revise it. It is a stable artifact. Digital culture, on the other hand, is a permanent palimpsest. What is here today is gone tomorrow, all that is solid melts into air. Digital publications do not have to be fully polished artifacts because they can be endlessly revised. There are benefits and drawbacks to this state of almost limitless transition. But now that the Encyclopedia Britannica has thrown up its hands and shuttered its print division, perhaps it is worth asking: what do we have to gain from adhering to a culture of permanence?

In the world of static print, errors or inaccuracies are irreversible. Filtration systems, such as line editing or peer review, help to mitigate this problem, but even the most perfectionist among us are not immune from good-faith mistakes. We have all had those moments when we come across a typo or an inelegant phrase that makes us cringe with regret. How wonderful would it be to correct it in an instant? And why stop at typos? Less than a year after I published an article on abolitionist convict George Thompson, I was wandering around in the vast annex where my school’s library dumps all of its old reference books. Here were hoary relics like the National Union Catalog or the Encyclopedia of the Papacy. I picked up a dusty tome and, by dumb luck, found an allusion to Thompson’s long-lost manuscript autobiography. When I wrote the article I had scoured every database known to man over the course of two years, including WorldCat and ArchiveGrid. But the manuscript, which was filed away in some godforsaken corner of the Chicago History Museum, had no corresponding entry in any online catalog. I had to e-mail the museum staff and wait while a kindly librarian checked an old-school physical card catalog for the entry (so much for the vaunted age of digital research). Although it was too late to include the document in my article, at least I had time to include it in my dissertation. But what if I could include it in the article?

The perfectionist temptation can be disastrous. No doubt this impulse to continually revise is what led George Lucas to update the first three Star Wars films with new scenes and special effects. Many fans thought that the changes ruined the experience of the original artifacts. It may be better in some cases to leave well enough alone. Yet there is something to be said for revision. One of the things I love about the Slavery Portal is that it is constantly evolving. I am always adding new material or tweaking the interface. When I find a mistake, I fix it. When new data makes an older entry obsolete, I update it. Writing History in the Digital Age, a serious work of scholarship that is also technologically sophisticated and experimental, uses Commentpress to enable paragraph-by-paragraph annotation of its content. Thus a peer review process that is usually conducted in private among a small group of people over a long period of time becomes something that is open, immediate, collaborative, and democratic. Projects like this have landmarks, qualitative leaps, or nodal points, just like software that jumps from alpha stage to beta release or version 10.4.11 to 10.5. But they are always in process. For every George Lucas, there is a Leonardo da Vinci. The Florentine Master only completed around fifteen paintings in his lifetime and was a consummate procrastinator. His extensive manuscript collection remained unpublished at the time of his death and largely unavailable for a long time thereafter. What if da Vinci had a blog? (I can just imagine the comment thread on Vitruvian Man: “stevexxx37:  wuz up wit teh hair? get a cut yo hippie lolz!”)

Although I sometimes still agonize about fixes or changes I could make to older work, I have found that dispensing with the whole pretense of permanency can be tremendously therapeutic. Rather than obsess over writing a flawless dissertation, I have come to embrace imperfection. I have come to view my thesis or my scholarly articles not as end products, but as steps in a larger progression. In a sense, they are still drafts. In the sense that we are always revising and refining our understanding of the past, all history is draft. Static books and articles are essential building blocks of our historical consciousness. It is hard to imagine a world where the book I cite today might not be the same book tomorrow. And yet, to a certain extent, we live in that world. When Apple finds a security loophole or a backwards compatibility issue in its software, it releases a patch. If I find a typo or an inaccuracy in this post three days from now, I can fix it immediately. If I come across new information a year later, I can make a revision or post a follow-up. Everything is process. The other day, I updated the firmware on a picture frame.

I will, of course, continue to aim for the most polished, the most perfect work of which I am capable. As much as I would like, I cannot write my dissertation as a blog post. I will edit and revise, edit and revise. Sometimes you do not know what you need to revise until you make it permanent. At the end, maybe, I will have a landmark. And I will welcome its insufficiency. There is something liberating about being incompl…

Digital History 2.0

The title of this blog is intentionally oxymoronic. Digital History stands for the fresh, the new, the innovative; Yale is a byword for the venerable, the traditional, and the conservative. The two terms exist in an awkward tension. I have always thought that if the digital humanities – as a methodology, as a practice, as a discipline – could thrive at a place like Yale, they could thrive anywhere. As an arbiter of the establishment, Yale offers a challenging test case for the digital revolution. The Past’s Digital Presence, a conference hosted here two years ago, was an important first step. (Most of the conference presentations are now available online, so if you missed it the first time around, you can relive it at home!) Exciting new initiatives like Historian’s Eye or the recently adopted Digital Himalaya project show Yale faculty experimenting with new forms and engaging new technologies to drive their scholarship.

In this forward-looking spirit, I am proud to announce the rebirth of Digital History at Yale as a group blog. So keep a lookout for some new names in the months to come – graduate students like me who have a thing or two to say about the digital humanities, or whatever else is on their minds.