Tag Archives: First World Problems

Combine JPEGs and PDFs with Automator

Like most digital historians, my personal computer is packed to the gills with thousands upon thousands of documents in myriad formats and containers: JPEG, PDF, PNG, GIF, TIFF, DOC, DOCX, TXT, RTF, EPUB, MOBI, AVI, MP3, MP4, XLSX, CSV, HTML, XML, PHP, DMG, TAR, BIN, ZIP, OGG. Well, you get the idea. The folder for my dissertation alone contains almost 100,000 discrete files. As I mentioned last year, managing and preserving all of this data can be somewhat unwieldy. One solution to this dilemma is to do our work collaboratively on the open web. My esteemed colleague and fellow digital historian Caleb McDaniel is running a neat experiment in which he and his student assistants publish all of their research notes, primary documents, drafts, presentations, and other material online in a wiki.

Although I think there is a great deal of potential in projects like these, most of us remain hopelessly mired in virtual reams of data files spread across multiple directories and devices. A common issue is a folder with 200 JPEGs from some archival box or a folder with 1,000 PDFs from a microfilm scanner. One of my regular scholarly chores is to experiment with different ways to sort, tag, manipulate, and combine these files. This time around, I would like to focus on a potential solution for the latter task. So if, like most people, you have been itching for a way to compile your entire communist Christmas card collection into a single handy document, today is your lucky day. Now you can finally finish that article on why no one ever invited Stalin over to their house during the holidays.

Combining small numbers of image files or PDFs into larger, multipage PDFs is a relatively simple point-and-click operation using Preview (for Macs) or Adobe Acrobat. But larger, more complex operations can become annoying and repetitive pretty quickly. Since I began my IT career on Linux, and since my Mac runs on a similar Unix core, I tend to fall back on shell scripting for exceptionally complicated operations. The venerable, if somewhat bloated, PDFtk suite is a popular choice for the programming historian, but there are plenty of other options as well; I’ve found standalone tools like pdfsplit and pdfcat to be especially valuable. At the same time, I’ve been trying to use the Mac OS X Automator more often, and I’ve found that it offers an arguably easier, more user-friendly interface, especially for folks who may be a bit more hesitant about shell scripting.
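To give a flavor of the command-line route, here is a minimal sketch of the two operations I use most, merging and splitting, with PDFtk. The folder and file names are hypothetical, and the snippet is guarded so it does nothing on a machine where pdftk or the files are absent:

```shell
# Sketch: merge and split PDFs with PDFtk (hypothetical filenames).
# The guard makes this a harmless no-op if pdftk or the "scans"
# folder is missing on your machine.
if command -v pdftk >/dev/null 2>&1 && ls scans/*.pdf >/dev/null 2>&1; then
  pdftk scans/*.pdf cat output box12.pdf      # merge in alphabetical order
  pdftk box12.pdf cat 3-7 output excerpt.pdf  # pull out pages 3 through 7
  merged="box12.pdf"
else
  merged=""                                   # tools or files unavailable
fi
```

The glob expands in sorted order, so pages come out in the same sequence as an alphabetical folder listing.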

What follows is an Automator workflow that takes an input folder of JPEGs (or PDFs) and outputs a single combined PDF with the same name as the containing folder. It can be saved as a service, so you can simply right-click any folder and run the operation within the Mac Finder. I’ve used this workflow to combine thousands of research documents into searchable digests.

Step 1: Open Automator, create a new workflow and select the “Service” template. At the top right, set it to receive selected folders in the Finder.

Step 2: Insert the “Set Value of Variable” action from the library of actions on the left. Call the variable “Input.” Below this, add a “Run AppleScript” action and paste in the following commands:

on run {input}
    tell application "Finder"
        set FilePath to (container of (first item of input)) as alias
    end tell
    return FilePath
end run

Add another “Set Value of Variable” action below this and call it “Path.” This will establish the absolute path to the containing folder of your target folder for use later in the script. If this is all getting too confusing, just hang in there. It will probably make more sense by the end.

Step 3: Add a “Get Value of Variable” action and set it to “Input.” Click on “Options” on the bottom of the action and select “Ignore this action’s input.” This part is crucial, as you are starting a new stage of the process.

Step 4: Add the “Run Shell Script” action. Set the shell to Bash and pass input “as arguments.” Then paste the following code:

echo ${1##*/}

I admit that I am cheating a little bit here. This Bash command will retrieve the title of the target folder so that your output file is named properly. There is probably an easier way to do this using AppleScript, but to be honest I’m just not that well versed in AppleScript. Add another “Set Value of Variable” action below the shell script and call it “FolderName” or whatever else you want to call the variable – it really doesn’t matter.
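For the curious, here is what that parameter expansion is doing: “${1##*/}” deletes the longest prefix matching “*/” from the first argument, leaving only the text after the last slash. A quick self-contained illustration (the path is made up):

```shell
# ${var##*/} strips the longest prefix matching "*/",
# leaving only the text after the last slash: the folder's name.
path="/Users/me/Research/Communist Christmas Cards"
echo "${path##*/}"   # prints: Communist Christmas Cards
basename "$path"     # the classic utility does the same job
```

Either form would work in the “Run Shell Script” action; the expansion just avoids spawning an extra process.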

Step 5: Add another “Get Value of Variable” action and set it to “Input.” Click on “Options” on the bottom of the action and select “Ignore this action’s input.” Once again, this step is crucial, as you are starting a new stage of the process.

Step 6: Add the action to “Get Folder Contents,” followed by the action to “Sort Finder Items.” Set the latter to sort by name in ascending order. This ensures that the pages of your output PDF are in the correct order, the same order in which they appeared in the source folder.

Step 7: Add the “New PDF from Images” action. This is where the actual parsing of the JPEGs will take place. Save the output to the “Path” variable. If you don’t see this option on the list, go to the top menu and click on View –> Variables. You should now see a list of variables at the bottom of the screen. At this point, you can simply drag and drop the “Path” variable into the output box. Set the output file name to something arbitrary like “combined.” If you want to combine individual PDF files instead of images, skip this step and scroll down to the end of this list for alternative instructions.

Step 8: Add the “Rename Finder Items” action and select “Replace Text.” Set it to find “combined” in the basename and replace it with the “FolderName” variable. Once again, you can drag and drop the appropriate variable from the list at the bottom of the screen. Save the workflow as something obvious like “Combine Images into PDF,” and you’re all set. When you right-click on a folder of JPEGs (or other images) in the Finder, you should be able to select your service. Try it out on some test folders with a small number of images to make sure all is working properly. The workflow should deposit your properly-named output PDF in the same directory as the source folder.

To combine PDFs rather than image files, follow steps 1-6 above. After retrieving and sorting the folder contents, add the “Combine PDF Pages” action and set it to combine documents by appending pages. Next add an action to “Rename Finder Items” and select “Name Single Item” from the pull-down menu. Set it to name the “Basename only” and drag and drop the “FolderName” variable into the text box. Lastly, add the “Move Finder Items” action and set the location to the “Path” variable. Save the service with a name like “Combine PDFs” and you’re done.
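For readers who would rather skip Automator entirely, the whole service can be approximated in a few lines of shell. This is only a sketch, under the assumption that pdftk is installed; the folder path in the example call is hypothetical:

```shell
# Sketch: combine every PDF in a folder into a single PDF named after
# the folder, deposited beside it -- mirroring the Automator service.
combine_folder() {
  local target="${1%/}"        # drop a trailing slash, if any
  local name="${target##*/}"   # folder name becomes the output basename
  local parent="${target%/*}"  # output lands next to the source folder
  # Globs expand in sorted order, like the "Sort Finder Items" step.
  # Guarded so the sketch is a no-op without pdftk or matching files.
  if command -v pdftk >/dev/null 2>&1 && ls "$target"/*.pdf >/dev/null 2>&1; then
    pdftk "$target"/*.pdf cat output "$parent/$name.pdf"
  fi
  echo "$parent/$name.pdf"     # where the combined file would appear
}

combine_folder "/Users/me/Research/Box 12"   # hypothetical folder
```

The naming logic is the same trick used in Step 4; the pdftk call replaces the “Combine PDF Pages” action.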

This procedure can be modified relatively easily to parse individually-selected files rather than entire folders. A folder action worked best for me, though, so that’s what I did. Needless to say, the containing folder has to be labeled appropriately for this to work. I find that I’m much better at properly naming my research folders than I am at naming all of the individual files within them. So, again, this process worked best for me. A lot can go wrong with this workflow. Automator can be fickle, and scripting protocols are always being updated and revised, so I disavow any liability for your personal filesystem. I also welcome any comments or suggestions to improve or modify this process.

The $14 Million Question

Yesterday a copy of the Bay Psalm Book, the first book composed and printed in British North America, sold at auction for a record-breaking $14.16 million. Members of Boston’s Old South Church decided to sell one of their two copies to help fund their cash-strapped congregation, and while the amount fell short of the auction house estimate of $15-30 million, it is certainly enough to buy a whole lot of snazzy sermons, baptismal fonts, and really uncomfortable pews. A number of talented and distinguished historians, including Jill Lepore and David Spadafora, have weighed in on the broader context and significance of this standard devotional text, printed in the fledgling Massachusetts Bay Colony in 1640. Amid all of the excellent scholarly analysis and public humanities work, however, no one seems to be asking the big question: why is someone willing to pay millions of dollars for a book that anyone with an internet connection can get for free? In an age of increasingly universal digitization, when nearly every major print publication prior to 1923 is available online, why do some public domain printed books sell for princely sums?

In 1947, when the last Bay Psalm Book sold at auction for $151,000, a researcher needed to physically travel to a major library in order to view an original copy. In the Northeast, there were plenty of options: Yale, Harvard, Brown, the Boston Public Library, the New York Public Library, the American Antiquarian Society. South of New York City, there was nothing. West of the Appalachians, the only choice was the private Huntington Library in California – and their copy was missing seven pages, including the title page. The only copy available to researchers outside of the United States was at the Bodleian Library at the University of Oxford. Bibliophiles published facsimile editions as early as 1862, but their production and circulation were limited. Depending on how far one had to travel, and factoring in layover times, scheduling, family and work obligations, and local arrangements, the one-time cost of consulting this small piece of religious history could be enormous. Gripes about the digital divide notwithstanding, the analog divide was and is much worse.

In 2013, copies of the Bay Psalm Book are everywhere – the Library of Congress, the World Digital Library, even the Old South Church. In fact, almost every single book, pamphlet, and broadside published in colonial America is available for free online or at a participating library through Readex’s Early American Imprints series. Yale’s copy of the Bay Psalm Book, which, coincidentally, was the one purchased at the aforementioned auction in 1947, is available in full here. That book sold for the equivalent of about $1.5 million in present-day terms. No copies of this august tome have been discovered or destroyed since 1947. So why is the same book worth over $14 million today? What accounts for this nearly tenfold increase in value?

I can think of several reasons why someone would pay so much for a book that is available to everyone for free. If there are significant deviations or marginalia between and among different copies or editions, each copy is more or less unique and thus uniquely valuable. Yet the differences among the various Bay Psalm Books are fairly well documented by this point and are not that extreme. Another reason might be personal profit or prestige. To his credit, David Rubenstein, the billionaire investor who purchased the book at yesterday’s auction, plans to loan it out to libraries around the country and to place it on deposit with a public institution. Although he may derive a good deal of personal satisfaction from this arrangement, I do not think that private gain is his primary goal. That leaves one more motive – the simple pleasure of the physical artifact.

The Early Dawn - rarer than the Bay Psalm Book and just as significant, but considerably less expensive. Courtesy of Special Collections, Yale Divinity School Library.

Perhaps one reason why the value of the Bay Psalm Book has increased ten times over the past 60 years is that paper, photographic, and digital reproductions have increased exponentially over the same period. In an era of digital alienation, there is greater romance in the physical object. To touch, to feel, to smell, even to be in the near presence of a famous text creates a kind of living connection with history. Such documents become, as Jill Lepore writes of the United States Constitution, “a talisman held up against the uncertainties and abstractions of a meaningless, changeable, paperless age.”

This is nothing new, of course. Since the days when early Christians passed around the head of Saint Paul or the foreskin of Jesus, and probably long before that, people have always been fascinated by sacred relics. Presumably, this is why so many tourists flock to see the original Declaration of Independence or the Wright Flyer in Washington D.C. One can read almost everything there is to know about the Declaration or the Wright brothers on an iPad while waiting in line at Stop & Shop, but there is something ineffably special about being in the presence of the real thing.

Even so, what justifies such an outrageous price tag? There are almost a dozen copies of the Bay Psalm Book, all available, to some extent, to the public. And there are plenty of rare and valuable historical documents that seldom see the light of day. A few years ago, I found an 1864 edition of the Early Dawn for sale online for less than $200. Published by American abolitionists at the Mendi Mission in West Africa starting in 1861, it is a periodical that ties together the struggles against slavery and racism across two continents. It is invaluable to our understanding of global politics, history, religion, and the state of our world today. In this sense, it is just as significant as the Bay Psalm Book. It is also extremely rare. As far as I know, there is only one other extant issue from the same time period. Fortunately, I was able to convince my colleagues at the Yale Divinity School to purchase and properly preserve this one-of-a-kind artifact so that it would be available for future researchers (click the image above for a full scan of the paper). I am sure that every historian who has worked on a major project has a story similar to this. If not an online purchase, then it is a special document found in an archive, or an especially moving oral history.

There are countless unique and historically significant documents and manuscripts moldering in libraries and repositories around the world. Some of them are true gems, just waiting to be discovered. Most of them remain unavailable and unknown. And yet our society sees nothing wrong with a private citizen spending a small fortune to acquire a copy of the Bay Psalm Book. There is no question that the venerable Old South Church deserves our support, and I have no doubt that its congregants do important work in their community and abroad. But how many lost treasures could have been brought to the world for the first time for the cost of this single public domain text? How much digitization, transcription, or innovation could $14.16 million buy?

Cross-posted at HASTAC

Mal d’Archive

You know you’re a pretentious academic blogger when you start titling your posts in French, and if you can quote one of the most notoriously abstruse French philosophers at the same time, well that’s just a bonus. Jacques Derrida is not much in style these days (if he ever was). His ideas, and especially his prose, have been the butt of many jokes over the past half-century, but his 1994 lecture series Mal d’Archive (later published and translated as “Archive Fever”) is a significant artifact of the early days of the digital revolution. Although I don’t quite agree with everything its author says, the book makes an earnest attempt to grapple with the intersection of technology and memory and offers some worthwhile insight.

An archivist works feverishly.

The idiomatic en mal de does not have a direct analogue in English, but for Derrida it means both a sickness and “to burn with a passion.” It is an aching, a compulsive drive (in the Freudian sense) to “return to the origin.” It is the sort of fever rhapsodized by Peggy Lee, the kind of unquenchable desire that can only be remedied by more cowbell. Whatever Derrida means by archive fever (and I think he leaves its precise meaning deliberately ambiguous), it is a concept that has some resonance for historians. As a profession, we tend to privilege primary sources, or archival documents, over secondary sources, or longer works that analyze and interpret an archive. Yet even the most rudimentary archival fragment contains within it a narrative, a story, an argument. Every document is aspirational; every archive is also an interpretation. There is no such thing as a primary source. There are only secondary sources. We build our histories based on other histories. The archive, Derrida reminds us, is forever expanding and revising, preserving some things and excluding others. The archive, as both subject and object of interpretation, is always open-ended, it is “never closed.”

Of course, in a few weeks, in what can only be described as a stunning disregard for French philosophy, the Georgia State Archives will literally shut its doors. Citing budget cuts, the state announced it will close its archives to the public and restrict access to special appointments (and those appointments will be “limited” due to layoffs). For now, researchers can access a number of collections through the state’s Virtual Vault, but it is not clear whether more material will be added in the future. The closure comes at the behest of Governor Nathan Deal, whose recent political career has been beset by ethics violations. The cutbacks are the latest in a string of controversial decisions by the Georgia governor, including the rejection of billions of dollars in Medicaid funds and a $30 million tax break for Delta Airlines, and will have a negative impact on government transparency. Coming on the heels of the ban on ethnic studies in Arizona, the campaign against “critical thinking” in Texas, attacks on teachers in Illinois and Wisconsin, and deep cuts in public support for higher education across the country, the news from Georgia seems a portent of dark times.

Archives are so essential to our understanding of the past, and our memory of the past is so important to our identity, that it can feel as if we have lost a little part of ourselves when one is suddenly closed, restricted, or destroyed. Historian Leslie Harris calls public archives “the hallmarks of civilization.” Although I don’t entirely agree (are groups that privilege oral tradition uncultivated barbarians?), Harris points to a fundamental truth. The archive is an integral component of a society’s self-perception. Without open access to archival collections, who could corroborate accusations that the government was conducting racist medical experiments? Who would discover the lost masterpiece of a brilliant author? Who would provide the census data to revise wartime death tolls? Who would locate the final key to unlock the gates of Hell? All of the boom and bluster about digitization and the democratization of knowledge notwithstanding, it is easy to forget that archival work is a material process. It takes place in actual physical locations and requires real workers. What does it mean for the vaunted Age of Information when states restrict or close access to public repositories?

However troubling the news from Georgia, all hope is not lost. This is not the end of days. Knowledge workers are fighting to preserve access to the archive. At the same time, efforts by historians to crowdsource the past offer a fascinating and potentially momentous expansion of archive fever. Several high profile projects are now underway to enlist “citizen archivists” to help build, organize, and transcribe documentary collections. Programmers at the always-innovative Roy Rosenzweig Center for History and New Media have just released a “community transcription tool” that will (hopefully) streamline the process of collaborative archiving, transcribing, and tagging across platforms. The potential for public engagement and the production of new knowledge is stupendous. Because they rely on the same volunteer ethos as Wikipedia, however, it is likely that part-time hobbyists will be more interested in parsing obscure Civil War missives than the correspondence of Jeremy Bentham. A citizen archivist with a passion for Iroquois genealogy might have little interest in, let’s say, the municipal records of East St. Louis. And this is precisely where major repositories and their well-trained staff can help supervise, guide, and even lead the public. What if every historian could upload all of their primary sources to a central repository when they finished a project? What if there was a universal queue where researchers could submit manuscripts for public transcription, along the lines of the now-ubiquitous reCAPTCHA service? Perhaps administrators could implement some sort of badge or other incentive program in exchange for transcribing important material? As all manner of documents are digitized, uploaded, and transcribed in a lopsided, haphazard, and ad-hoc fashion, in vastly disparate quality, in myriad formats, in myriad locations, physical archives and their staff are needed more than ever – if only to help level the playing field. 
Among the most important functions of the professional archivist is to remind us that there is much that is not yet online.

Note recording the arrival of the Amistad survivors in Freetown, Sierra Leone, Jan. 1842. Liberated African Register, Sierra Leone Public Archives, Freetown.

One of the best experiences I’ve ever had as a researcher was in the national archives of Sierra Leone. Despite a century and a half of colonialism, a decades-long civil war, and other challenges that come with occupying a bottom rung on the global development index, the collections remain open to the public and continue to grow and improve. They have even started to go digital thanks to some help from the British Library and the Harriet Tubman Resource Centre. Sitting in the Sierra Leone archives, with their maggot-bitten manuscripts, holes in the windows, and sweltering heat, I felt the much-discussed global digital divide become very real. Peering out of the window one day to see a mass of students drumming and chanting, then chased by soldiers in riot gear, and hearing the screams from the crowd as I shielded myself from gunfire behind a bookshelf thick with papers, I found it difficult to look at knowledge work the same way again. When I enter a private archive in the United States, with its marbled columns and leather chairs, its rows of computers and sophisticated security cameras, I am grateful and angry – grateful that this is offered to some, angry that it is denied to others. The archivists and their support team in Freetown are heroes. Full stop. I worry about them when I read about the conflict in Libya, which continues to spill across borders and has led indirectly to the destruction of priceless archives and religious monuments in Mali.

Compared to the situation in West Africa, the more modest efforts to preserve and teach the past across the United States seem like frivolous first world problems. On the other hand, all information is precious. Whether physical or digital, access to our shared heritage should not be held hostage to political agendas or economic ultimatums. Archives are a right, not a privilege. I like to think that Derrida, who grew up under a North African colonial regime, would appreciate this. If Sierra Leone can keep its archives open to the public, why can’t the state of Georgia?

Cross-posted at HASTAC

Eternal Sunshine of the Spotless Draft

I am an inveterate Mac user. Some might say I’m a fanboy. Although I like to think that my brand loyalty is due to a cleaner, easier, more pleasing operating experience, there are other factors. Part of my attraction stems from the “Think Different” ad campaign of my youth – flattering for any impulsive iconoclast. Or maybe it’s that soothing chime. I don’t agree with everything Apple has ever done, especially now that they’ve thundered into the mainstream, but I still think that, when all is said and done, they can produce a better quality product than the competition (now if only they could do it humanely). Apple devices are marketed as polished, eloquent, intuitive. A common complaint about Microsoft, on the other hand, is that they have trouble releasing a finished product. Windows is notorious for being incomplete, buggy, awkward, in need of an endless cascade of updates and service packs. Of course, Mac OS X, Linux, Android, and every other decent piece of software does exactly the same thing. OS X has endured at least seven major revisions in the past decade, while Windows has suffered maybe three (it all depends on your definition of “major revision”). This endless turnover used to bother me. Does Firefox really need to release a new version every other day? How much useless bloat can software designers cram into MS Word before it finally explodes? Lately, however, I’ve come to accept and even embrace this radical incompleteness.

The age of static print was defined by permanence. Authors and editors had to work for a long time on multiple drafts, revisions, and proofs. The result was a clay tablet, or a scroll, or a codex book. With the onset of the printing press, it was easier to make corrections. Movable type could be reset and rearranged to create appended, expanded, and revised editions. Still, the emphasis was on stability. The paperback book I have on my desk right now looks pretty much exactly the same as it did when it was first published in 1987. And it will always look that way. A lot of effort went into its publication because it would be extremely difficult to revise it. It is a stable artifact. Digital culture, on the other hand, is a permanent palimpsest. What is here today is gone tomorrow, all that is solid melts into air. Digital publications do not have to be fully polished artifacts because they can be endlessly revised. There are benefits and drawbacks to this state of almost limitless transition. But now that the Encyclopedia Britannica has thrown up its hands and shuttered its print division, perhaps it is worth asking: what do we have to gain from adhering to a culture of permanence?

In the world of static print, errors or inaccuracies are irreversible. Filtration systems, such as line editing or peer review, help to guard against this problem, but even the most perfectionist among us are not immune from good faith mistakes. We have all had those moments when we come across a typo or an inelegant phrase that makes us cringe with regret. How wonderful would it be to correct it in an instant? And why stop at typos? Less than a year after I published an article on abolitionist convict George Thompson, I was wandering around in the vast annex where my school’s library dumps all of its old reference books. Here were hoary relics like the National Union Catalog or the Encyclopedia of the Papacy. I picked up a dusty tome and, by dumb luck, found an allusion to Thompson’s long-lost manuscript autobiography. When I wrote the article I had scoured every database known to man over the course of two years, including WorldCat and ArchiveGrid. But the manuscript, which was filed away in some godforsaken corner of the Chicago History Museum, had no corresponding entry in any online catalog. I had to e-mail the museum staff and wait while a kindly librarian checked an old-school physical card catalog for the entry (so much for the vaunted age of digital research). Although it was too late to include the document in my article, at least I had time to include it in my dissertation. But what if I could include it in the article?

The perfectionist temptation can be disastrous. No doubt this impulse to continually revise is what led George Lucas to update the first three Star Wars films with new scenes and special effects. Many fans thought that the changes ruined the experience of the original artifacts. It may be better in some cases to leave well enough alone. Yet there is something to be said for revision. One of the things I love about the Slavery Portal is that it is constantly evolving. I am always adding new material or tweaking the interface. When I find a mistake, I fix it. When new data makes an older entry obsolete, I update it. Writing History in the Digital Age, a serious work of scholarship that is also technologically sophisticated and experimental, uses Commentpress to enable paragraph-by-paragraph annotation of its content. Thus a peer review process that is usually conducted in private among a small group of people over a long period of time becomes something that is open, immediate, collaborative, and democratic. Projects like this have landmarks, qualitative leaps, or nodal points, just like software that jumps from alpha stage to beta release or version 10.4.11 to 10.5. But they are always in process. For every George Lucas, there is a Leonardo da Vinci. The Florentine Master only completed around fifteen paintings in his lifetime and was a consummate procrastinator. His extensive manuscript collection remained unpublished at the time of his death and largely unavailable for a long time thereafter. What if da Vinci had a blog? (I can just imagine the comment thread on Vitruvian Man: “stevexxx37:  wuz up wit teh hair? get a cut yo hippie lolz!”)

Although I sometimes still agonize about fixes or changes I could make to older work, I have found that dispensing with the whole pretense of permanency can be tremendously therapeutic. Rather than obsess over writing a flawless dissertation, I have come to embrace imperfection. I have come to view my thesis or my scholarly articles not as end products, but as steps in a larger progression. In a sense, they are still drafts. In the sense that we are always revising and refining our understanding of the past, all history is draft. Static books and articles are essential building blocks of our historical consciousness. It is hard to imagine a world where the book I cite today might not be the same book tomorrow. And yet, to a certain extent, we live in that world. When Apple finds a security loophole or a backwards compatibility issue in its software, it releases a patch. If I find a typo or an inaccuracy in this post three days from now, I can fix it immediately. If I come across new information a year later, I can make a revision or post a follow-up. Everything is process. The other day, I updated the firmware on a picture frame.

I will, of course, continue to aim for the most polished, the most perfect work of which I am capable. As much as I would like, I cannot write my dissertation as a blog post. I will edit and revise, edit and revise. Sometimes you do not know what you need to revise until you make it permanent. At the end, maybe, I will have a landmark. And I will welcome its insufficiency. There is something liberating about being incompl…

Floating Universities

In late September 1926, the SS Ryndam departed Hoboken, New Jersey, on a journey around the world. Dubbed the “Floating University,” she housed about 500 students, representing almost 150 different colleges, and dozens of professors and administrators. Among the world-class faculty were representatives from Clark University and Williams College, the universities of Michigan, Missouri, New York, Texas, Washington, Turin, and Vienna, and the former governor of Kansas. Almost one third of the students were freshmen, who would earn “[f]ull credit for courses passed” when they returned “to stationary education.” The brainchild of New York University psychology professor James Lough, the charmingly-named “University World Cruise” “visited 35 countries and more than 90 cities,” from Shanghai to Oslo, before returning in May 1927. A precursor of modern study abroad programs, the cruise marked a turning point in the globalization of American education. The goal, as Lough put it, was “to train students to think in world terms.” Within months of its return, educators started formulating a floating high school to supplement the curriculum.

The new Floating University, launched in Fall 2011, does not literally float, but it is no less ambitious. Its signature course, entitled “Great Big Ideas,” purports to offer “the key takeaways of an entire undergraduate education.” In a series of twelve video lectures, students receive “a survey of twelve major fields delivered by their most important thinkers and practitioners.” Topics range from physics to philosophy and feature an all-star cast of instructors from Columbia, Harvard, Yale, and the University of Chicago. Lawrence Summers, the former president of Harvard, offers his thoughts on education. Dean of Yale Admissions Jeffrey Brenzel recommends his “Top 10 Classics” in high definition video “featuring Hollywood production values.” There is even a lecture on investment strategy by superstar hedge fund manager William Ackman. Students at Harvard, Yale, and Bard College can enroll in the course for credit. Others can buy a six-month subscription for the low, low price of $199 (Ackman’s video, “Who Wants to Be a Billionaire?” is available in “enhanced stand-alone format” for $59.99).

Unlike the original Floating University, this most recent iteration has only one course and is not focused on world experience. Instead, it brings the world to you. Although students taking the course for credit meet in person for a weekly seminar, for the most part it is an experiment in structured independent learning. The motive force behind this latest floater is not a professor. Rather, it is real estate mogul Adam Glick. The university is, in fact, a for-profit venture of Glick’s Jack Parker Corporation and Big Think (a website that aggregates what it deems “the most important ideas” of today). While James Lough dreamed of educating global citizens, Glick’s concerns are much more prosaic. “[I]n my business, I was having difficulty hiring generalists,” he said in an interview last year. “Most people had graduated college in the silos of particular majors. They were very, very smart, but didn’t have a lot of perspective.”

I have mixed feelings about Glick’s university. On the one hand, it embodies a collaborative, interdisciplinary spirit that is in great demand these days. Despite the hype, it does make important topics and world-class intellectuals available to almost anyone. Certainly, its broad scope and accessible format will encourage students “to think in world terms.” On the other hand, I am skeptical of running essential public services (such as higher education) as for-profit industries. The inclusion of hedge fund manager Ackman, for example, smacks of a tacky infomercial. There are digital alternatives that offer similar content for no cost whatsoever. Academic Earth, to which Yale contributes, hosts dozens of world-class courses, including a blockbuster series on the Civil War by my adviser. (Learn about the election of 1860 while munching a bowl of popcorn; ponder the shortcomings of the Freedmen’s Bureau while waiting for the bus!) The famous Khan Academy offers a plethora of lectures and tutorials that are like watching a filmed version of Wikipedia. Of course, as every digital humanist knows, “lectures are bullshit.”

Perhaps the most troubling oddity about “Great Big Ideas” is that it has no content in history. Zero. Zilch. Nada. Nothing. Not even close. Is history not a “major field” with “important thinkers”? Even more baffling, history instruction provides exactly the kind of big-picture-oriented, theoretically grounded, interdisciplinary knowledge to which “Big Ideas” aspires. History is a discipline of disciplines; in their attempts to make sense of the past, historians draw on anthropology, archeology, computer science, demography, economics, film studies, geography, linguistics, literature, musicology, philosophy, physical and natural science, psychology, social theory, and statistics.

History is a vast laboratory of humanity; it illuminates the present by tracing the trajectories of the past. It is, as Robert Penn Warren put it, “always a rebuke to the present.” It tells us that the original Floating University had difficulty raising funds after its maiden voyage, and that it foundered in the wake of the 1929 market crash. If the new Floating University is to avoid a similar fate, it could do with a lesson in history.