In the spirit of Amy Ko’s many public trip reports, I’ve decided to write and share trip reports from my adventures, both at home and abroad.
Reflections on CHI'23
I admit that I felt vaguely reluctant to go. It was far away, during the last week of class, and felt indulgent—like a luxury—when I had so much to do at home in Cambridge. Although I understand in my bones the unparalleled benefits of physical co-presence for communication and rapport-building, its impact is hard to measure compared to other academic activities, like hours of writing logged toward a grant submission. Part of what makes a conference delightful is that you're never quite sure what will come of your time there, and this CHI was no exception.
I was committed to going, of course, because I had been graciously invited by Dan Russell, my former Google internship mentor, to co-teach a CHI course on human-AI interaction. I also wanted to be there to see the community's response to the PaTAT paper I co-advised with Toby Li, the Late-Breaking Work on an Accessible Text Framework led by our former visiting scholar Dr. Hendrik Heuer, and my graduate seminar student Nikhil Singh's presentation of our TOCHI paper on LLM-based multi-modal writing assistance, which grew out of the mixed MIT and Harvard grad student cohort that took my human-AI interaction seminar during the depths of the pandemic.
Unfortunately, given the conflict between CHI and our semester timeline (and a particularly inflexible gate agent at our connecting airport in Munich), I missed watching PaTAT's lead author, Simret, knock her presentation out of the park. While Toby assured me it went great, I was disappointed not to be there, since it's one of my favorite recent papers: I'm especially proud of our focus on human learning, with gains in machine learning accuracy as an incidental benefit. Treating human learning as distinct and valuable alongside machine learning is something we think a lot about within Harvard HCI; Zana Buçinca, my colleague Krzysztof Gajos's senior PhD student, led a NeurIPS workshop essay with a similar theme.
While I thoroughly enjoyed chairing the session on Making Sense & Decisions with Visualization, my favorite session was a type I'd never attended before: alt.chi. You really don't know what to expect; I walked in on a man moderating a panel composed of videos of himself playing different roles, illustrating the competing arguments of his submission "Unsocial Robots: How Western Culture Dooms Consumer Social Robots to a Society of One." This was followed by a thought-provoking paper questioning the doctrine of simplicity in user interface design and a playful transnational paper, "The Internet of Bananas: Critical Design and Playfulness for Citizen Sensing and Electronic Literacy." The most immediately useful paper, however, was the final one in the session: "The Systematic Review-lution: A Manifesto to Promote Rigour and Inclusivity in Research Synthesis." The presenting author called out what I had only registered implicitly: systematic reviews, when done well, provide real value, yet in HCI both the notion of what is methodologically appropriate and the pathway to publication are unclear. As someone who has tried to write such a paper, I cannot wait to dive back into that authoring process after reviewing her recommendations for HCI-appropriate methodology. (And maybe endorsing her call to recognize systematic reviews as a genuinely valuable contribution within the SIGCHI community.)
In between sessions, I had the pleasure of chatting with my longtime friend Petr Slovak, whose paper taught me about scaffolding transformative reflection; Majeed Kazemitabaar, who told me about his CHI paper on how AI code generators affect novice programmers; and Daniel Buschek, whose IUI-turned-invited-TIIS conceptual framework paper on human-AI interaction is a nice recent example of a framework paper.
The kind of predictably unpredictable connections that one makes at a good conference was exemplified by my time waiting in line to enter the German HCI party. I never made it into the party itself because the line was so long—it snaked its way around the building and down the street—but I did start chatting with someone with expertise working with people with different cognitive abilities. I asked her about the latest research on interfaces for neurodivergent folks, prompted by a CS professor's recent tweet about asking ChatGPT to write an "ADHD-friendly summary" of an email, and mused about whether Hendrik's Accessible Text Framework might be flexible enough to guide the design of such interfaces. (It was originally designed to support those who are functionally illiterate for any reason, e.g., cognitive differences or relative unfamiliarity with the language.)
On the final day of the conference, I revisited my materials for the human-AI interaction course and realized, of course, that I needed to update how I described my thoughts on the matter. Writing is thinking, after all, and writing out my lesson had led me to reappraise earlier decisions I'd made about how various arrows in my slowly evolving diagram are placed and annotated.
Other Harvard HCI folks who attended:
- Glassman Lab Postdoc
- Recent graduate advised by Prof. Lydia Chilton, Columbia HCI
- IIS Lab Senior PhD student
- Postdoctoral Fellow, Harvard Center for Research on Computation and Society