Blog Post

Notes from #aha17: Day 2

A lighter day on my end – and in Denver, where the sun came out and I had a view of the Rockies from my window. Win.

My brain is pretty mushy from trying to figure out a bunch of digital history tools at the moment, but here are the (briefer than yesterday's) highlights:


Praise for a colleague

First and foremost, Jordan Reed, fellow Drew grad student and digital historian, was one of today's most-tweeted people thanks to his presentation with SHARP (Society for the History of Authorship, Reading, and Publishing)! Woot!


#aha17 #s110: Collaborative Digital History

Great panel/roundtable from Stephen Robertson, Jim Clifford, Ian Milligan, Emily Merchant, and Myron Gutmann – and the audience, which was again full of live tweeters. (Seriously – such a joy to tweet with other people!)

The takeaways for me:

  • Digital history can be learned while projects are in progress.
  • Woot! Every collaborator – even/especially grads and undergrads – deserves reward/credit for their work, because history is always-already collaborative; we just don't tend to make that explicit.
  • We do need to take care as we consider how to treat student collaborators, though. What work should be public? What work should be withheld? How are we ensuring that students have a clear and respected say?
  • Odds and ends about who we write for and what digital projects mean for securing jobs and/or gaining tenure.


#aha17 #s117: Digital Drop-In

What follows pretty much sums up how I feel about this session.

Digital history is still a new community for me – but it is a community as far as I can tell. And a remarkably supportive, interested, and creative one in which resources are made to be shared.

Jeff McClurken welcomed me at the door, listened patiently to my project description and skill needs, and then pointed me to two different digital historians/humanists who had great suggestions for tools to use for data analysis.

I had the chance to speak with Ian Milligan again, and he kindly re-demonstrated some of the scraping and analysis tools from yesterday (Voyant and DocNow). I'm still putzing around with these tools and figuring out how to make them work for my needs, but I'm feeling on firmer ground with the dissertation after the drop-in session.


Notes from #aha17: Day 1

From GIPHY

It’s 4° F in Denver. So obviously one of the most shared images on Twitter this morning was Jon Snow. This sort of thing might be what led to these shenanigans on Channel 9 News: History Buffs Tweet About Snow, Hilarity Ensues.

It is my intention to write a brief summary of each day at #aha17 (the American Historical Association's annual meeting) in Denver – but goodness knows I never finish a series of blog posts. So this might just be a one-off thing. Here are the highlights from today. Readers, beware. Herein lies an excessive number of links…


Personal Odds and Ends:

I was so grateful when presenters shared links to slides today! It meant I could happily toggle between tweeting, exploring the digital projects discussed, and browsing slides and links.

Okay. I give in. I’ll start providing slides before class. (It’s good to be a pseudo-student sometimes…)


#aha17 #gsdh: Getting Started in Digital History

Link to AHA Program

I attended the session on Web Scraping, led by Ian Milligan (@ianmilligan1). If you’d like to browse the slides and links, Ian was good enough to provide all of the materials for the session on his website.

Web scraping, for those of you unfamiliar with the term, means pulling all sorts of basic info off of a website. For instance, if you point a web tool (like import.io) at a website such as a database of song lyrics (as in this example from Ian this morning), the tool will extract information like song title, artist, and relevant links from the webpage. This information can then be exported into a Comma-Separated Values (.csv) file – pretty much an Excel file with a different ending. That data can then be run through any number of analysis tools (we used Voyant Tools) to study things like word frequency, spikes in popularity, and the context of specific words, people, or places.
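If you're curious what that scrape-then-analyze workflow looks like under the hood, here's a minimal Python sketch using only the standard library. The HTML snippet, field names, and markup are invented for illustration – this isn't what import.io or Voyant actually do internally, just the same three steps: extract structured fields from a page, export them to CSV, and count word frequencies.

```python
import csv
import io
from collections import Counter
from html.parser import HTMLParser

# Stand-in for a scraped lyrics-database page (hypothetical markup).
PAGE = """
<div class="song"><a href="/s/1">Clocks</a><span>Coldplay</span></div>
<div class="song"><a href="/s/2">Yellow</a><span>Coldplay</span></div>
"""

class SongParser(HTMLParser):
    """Pull title, artist, and link out of each song entry."""
    def __init__(self):
        super().__init__()
        self.rows = []
        self._href = None
        self._field = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._field = "title"
        elif tag == "span":
            self._field = "artist"

    def handle_data(self, data):
        if self._field == "title":
            self.rows.append({"title": data, "artist": None, "link": self._href})
        elif self._field == "artist" and self.rows:
            self.rows[-1]["artist"] = data
        self._field = None

parser = SongParser()
parser.feed(PAGE)

# Export to CSV -- "pretty much an Excel file with a different ending".
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["title", "artist", "link"])
writer.writeheader()
writer.writerows(parser.rows)

# Voyant-style word-frequency count over the extracted titles.
freq = Counter(word.lower() for row in parser.rows for word in row["title"].split())
print(freq.most_common(2))
```

In practice the parsing step is the fiddly part – every site's markup is different – which is exactly why point-and-click tools like import.io are so appealing for getting started.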

For me, I plan to apply similar tools and methods in my social-media-based dissertation. We spent some time practicing scraping social media using DocNow, which lets you run a hashtag on a given day, pulls all of the tweets and related RTs, and then allows you to export the data for analysis. Super useful given that I'm hoping to analyze upwards of 200 tweets per class meeting this semester…
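The hashtag-on-a-given-day step can be sketched as a simple filter-and-export, which is roughly what I'll need per class meeting. To be clear, the tweet records below are made-up stand-ins and the field names are my own – DocNow handles the actual collection from Twitter – but the shape of the workflow is the same:

```python
import csv
import io
from datetime import date

# Invented sample records standing in for collected tweets.
tweets = [
    {"text": "Great panel! #aha17", "day": date(2017, 1, 6), "rt": False},
    {"text": "RT @someone: Great panel! #aha17", "day": date(2017, 1, 6), "rt": True},
    {"text": "Snow in Denver", "day": date(2017, 1, 6), "rt": False},
    {"text": "Old news #aha17", "day": date(2017, 1, 5), "rt": False},
]

def collect(tweets, hashtag, day):
    """Keep tweets (RTs included) that use the hashtag on the given day."""
    return [t for t in tweets if hashtag in t["text"] and t["day"] == day]

batch = collect(tweets, "#aha17", date(2017, 1, 6))

# Export the day's batch to CSV for downstream analysis.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["text", "day", "rt"])
writer.writeheader()
writer.writerows(batch)
print(f"{len(batch)} tweets collected")
```

At a couple hundred tweets per class meeting, even this naive list filtering is plenty fast; the real work is in the analysis afterward.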

After the Web Scraping Workshop, we broke out for lunch and “table talks” hosted by faculty and alt-acs who shared their experience in public history, choosing digital humanities tools, sustaining digital projects, and funding digital projects, among other digital humanities (DH) topics.

I attended the informal talk on DH jobs led by Rebecca Wingo, who offered helpful advice about what jobs are out there, what degree programs might be most useful, and what additional certifications and experience would help in pursuing a DH job. The takeaway for me was a confirmation of the usefulness of George Mason University's DH certificate program (which I may be looking at in the future) and her suggestion to attend digital history training opportunities to acquire skills and experience as needed.

For the second round, I headed for the Grading Digital Projects table led by John Rosinbum. We talked about timeline assignments, rubrics, and citations, and – good news for the next round of #hwc111 students – I'm more thoroughly convinced of the necessity of rubrics. So, rubrics are coming for the Spring 2017 blogging project! Also, probably a more thorough and interactive conversation about why and how to cite sources on the web. Spread the news, dear students…


#aha17 #s22: Historical Sources as Data: Opportunities and Challenges

Link to AHA Program

Wowsa. When you attend a #dighist session, everybody live tweets! Which was a great thing because the presentations given by Kalani Craig, Lauren Tilton, and Brandon Locke were brilliant and useful and challenging.

All three presentations challenged listeners to consider how best to reach wider audiences in clearer ways by:

  • Bringing information out from behind paywalled collections (i.e., those only available to institutions with money, like ProQuest or JSTOR) in legal but accessible ways through the use of good old copy-and-paste, data compilation, and natural language processing
  • Shedding light on lesser-known but exceptionally important figures and places in history through network analysis and comprehensive metadata for images and sources
  • Making our methodologies and sources transparent so other scholars can assess and help us grow our process, and so our data can remain reusable

This session was, I swear, more compelling than I'm making it sound. I highly recommend checking out the presenters' projects for a better sense of how innovative and important their work is.


#aha17 #s31: A Retrospective on Tuning: Where Have We Been and Where Should We Go?

Link to AHA Program

Yup, all the live tweets for this one are mine. Because when you don’t go to a #dighist session, sometimes you’re the only one tweeting. Ah well.

I’m still processing this one. I like the ambitions of the Tuning Project. The idea is to host, coordinate, and focus conversations about what faculty want history majors to be able to do when they finish the degree.

The aim of the three-year project has been to help establish guidelines useful to history departments across the United States and to foster a more natural language surrounding historical skills so students have a fuller stake in course assessments and outcomes. The project is faculty (not admin) driven, it has increased the AHA’s emphasis on teaching, and the panelists today seemed committed to bringing a wider variety of educators into the conversation in the future.

The focus of the project is also shifting from majors to introductory courses, which I (selfishly) think is a great move given that this is what I teach.

I’m not totally sold though. The project itself still requires a lot of explanation – at least for those of us who aren’t really part of history departments. I don’t know that there are a ton of resources or training on site for new college and university teachers to implement the suggested Tuning outcomes in effective ways (though Anne Hyde did note the increasing presence and usefulness of centers for teaching and learning on campuses). I’m also still not certain how much students value the language of transferable skills in general education courses… But then I haven’t really asked how they feel about it. (Maybe I will in the near future.)

It was a thought-provoking session one way or another, and I'm grateful for the conversations led by Elaine Carey, Anne Hyde, Elizabeth Lehfeldt, and Daniel McInerney in the field of history education.


Receptions

I finished out the day at the grad student reception (met a master's student from the University at Buffalo and chatted about pre-modern China) and the Twitterstorians/bloggers reception. It turns out that if you hang out long enough, you meet people who recommend awesome medieval Tumblrs, Baltimore tours, and scholars of history teaching and learning. Also, they had Denver-brewed beer. Win.