US Census 2020: count every person…

 

Scope

OpenOakland volunteer brigade
February-March 2020

Team

Thomas Deckert
Jillian Hansen-Lewis
Erin Nedza
Mike Ubell
+ Code for America

Key Skills

Design Research
Digital Prototyping

 

…because every person counts.

 

The US Census, conducted every ten years, strives to count every single person living in the United States. The results determine where billions of dollars of federal funding go, how congressional seats are apportioned, and more. However, many groups are regularly underrepresented in the Census, such as people experiencing homelessness and those who do not speak English.

This was the first time the US Census was available to complete online, which presented new challenges to reaching complete representation. Code for America asked the OpenOakland brigade to help out.

 
 

Supporting those who need it most

In January of 2020, community centers across the nation were preparing to host Census Events, where staff would be on hand to assist people in filling out the Census. Our mission, in partnership with three other brigades around the country, was to determine how community centers could support people from underrepresented populations in completing the Census.

To do this, we needed to understand the challenges people faced while filling out the Census.
The plan: build a prototype of the digital version of the Census and test both the digital and paper versions with specific underrepresented communities. Unlike traditional usability testing, the goal was not to improve the product itself; rather, it was to document the common obstacles people encountered while filling out the Census and to compile a report on how staff could best support these Census takers.

 
 

Our goal was to enable community centers to support underrepresented groups in completing the Census.

 
 

Key steps

  • Creating a testing plan

  • Recruiting and scheduling tests

  • Prototyping

  • Usability testing

  • Synthesizing insights

  • Compiling final report

drawing of helping someone take the Census
 

The prep work


At the start of February, Code for America, the OpenOakland brigade, and the brigades in Cleveland, Chicago, and Miami put together a Census testing plan. The plan covered the project's goal, the populations to test with, the number of tests to run, testing methodology, timelines, and more.

I contributed to establishing best testing practices, setting the number of tests we would aim to conduct, and writing a screener survey for recruitment.

OpenOakland organized two test sessions, one with members of a literacy and technology skills group on the digital prototype, and one with non-English speakers on the paper Census. I participated in the Oakland ‘digital divide’ usability tests.

 

Prototyping in tandem


Our first step, after putting together the research-and-testing plan for the project, was to build the digital prototype. Jillian, Thomas, and I had four days to collaborate on this before the first usability test was scheduled.

The digital version of the US Census had already been built, and no changes would be made to its UX or copy; however, we could not be granted access to the live website for usability testing.

Instead, we had a video demo of the Census website, from which we took screenshots. We settled on Figma as our prototyping software so we could collaborate in real time and quickly create a clickable prototype by linking the screenshots together. However, once we started uploading the images into Figma, it became clear they were far too low quality. We would need to manually rebuild each screen over the website images; this would take considerably more time, but we hoped it would yield more functionality and fidelity in the prototype.

 
 

Image quality was too low, so we rebuilt each page of the Census.

 
 
Prototype pages were rebuilt on top of low-quality screenshots of the actual Census website, because the text in the images was not clearly legible and the color balance was poor. We recreated all the text and icons, using reusable components and a style guide for fonts.


 


After the initial icons and header/navigation components were made, I put together a style guide from which we could pull navigation buttons, radio buttons, checkboxes, color and font styles, and other reusable components. Once the style guide was complete, the three of us divided up the work of building pages; as this neared completion, I served as the proofreader.

After a weekend of collaborative building, we began to link screens together. I completed the bulk of this work on Monday, with help and testing from Jillian and Thomas.

 
 

The bumps in the road

This was when we discovered several challenges with the prototype: additional functionality we had hoped to add to our hand-built screens would be impossible, or would require time-consuming workarounds. For each challenge that arose, we experimented with solutions and made decisions as a group.

 
 

Challenges arose as we prototyped, such as how to enter text in form fields; we developed solutions and shifted our process as a team to implement them.

 
 


Initially we had hoped to use overlays to switch multiple-choice answers to a selected state. However, in Figma, overlays prevent interactivity with the rest of the screen, so you could not select an answer and then click ‘next’. Our solution, though time-consuming, was to build a screen showing the selection of each multiple-choice answer, and even more screens for questions where users could select several answers at once. On some Census questions there was simply not enough time to build a screen for every possible selection, given that there were over a dozen choices (roughly 15 checkboxes on the race question alone, which can be combined in tens of thousands of ways).
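For a rough sense of scale, here is a back-of-the-envelope count, assuming the roughly 15 race checkboxes can be selected in any combination (as the question allows):

\[
\text{possible non-empty selections} = 2^{15} - 1 = 32{,}767
\]

so building a dedicated screen for every combination was never a realistic option.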

Figma also does not allow you to type within the prototype, for example adding your name to a form field. To mimic this interactivity, I built screens that appear to auto-fill information when a form-field is clicked. However, this meant we had to leave in dummy information that would be the “identity” of our prototype testers.

These choices had interesting ramifications during usability testing.

 

Clicking in the Census ID form field, as one would do before typing in it, automatically fills in a “dummy” answer. Because we couldn’t prototype filling in form fields, we mimicked the interactivity. On the next page, a dummy address is already provided for the test participant.

 

Real people, real insights


OpenOakland conducted “digital divide” usability tests at the public library with five adults who described themselves as not having regular access to computers or a high level of comfort using them.

I moderated two tests with Erin as my note-taking partner. We prepped participants and explained that functionality was limited, so rather than typing their own information into the prototype, they would “assume the identity” (the name, address, and relationships) that showed up on the screen. We asked them, however, to answer questions accurately whenever possible.

 
Before the usability tests began, we prepared the prototype and screen-recording software on our computers, testing everything to make sure it was running smoothly. We also brought candy to thank our participants.


 

Dealing with identity

The Census begins by asking for a Census ID number (linked to a home address), which people receive on a card in the mail. We did not have an example of this card to show participants, and they were confused by this step, thinking they needed to enter their SSN or a similar number.

Our first participant also struggled with their “dummy identity” that resulted from our choice to have name and address information auto-fill into text fields. It was not intuitive to answer some questions with real information and some with made-up information.

 
 

Participants struggled to answer questions accurately while going along with the dummy identity in our prototype.

 
 


At the conclusion of the first test, I realized we could address both of these issues and improve the experience for our participants: I quickly drew up an “ID card” on a piece of scratch paper for test-takers to keep in front of them. I wrote the dummy name, address, and housemate name on the card, followed by the Census ID number. Instead of having to retain these details mentally, participants could use the card as a guide.

 
Having a card with the “dummy information” that participants would see in the prototype reduced their cognitive load and gave them a clearer picture of their task: managing both real and made-up answers.


 


Our second participant was much more comfortable throughout the test, because they had a clearer mental model of their task, and it felt familiar, like playing a game with an avatar: You are now John Smith. Your mission is to fill out the Census accurately using the information at hand!

Initially we didn’t know whether the confusion around the Census ID number came from not having the card available or from the wording and format of the Census website. Although our makeshift ID card eased much of the confusion, the Census ID step still caused some uncertainty; we made sure to include this in our recommendations report.

 
 

Was the source of confusion the missing ID card, or the language used on the Census?

 
 

An example of the Census ID card that arrives in the mail.

 

The accuracy problem

Because they were not always filling in their real information, participants soon began to feel that they could click any answer and that accuracy was irrelevant. Some began to rush through the prototype, reading questions only partway, then clicking an answer without sharing their thoughts out loud. We faced a serious challenge with participant engagement and with drawing accurate insights from the tests.

This served as an important reminder about usability testing in general: the test needs to be properly geared toward the type of information you want to get out of it.

 
 

Usability tests must be properly geared toward the kind of information you want to learn…but our prototype was at odds with our goal.

 
 


What we wanted to test was comprehension: where Census-takers were confused by the language and format. This goal was at odds with the fact that participants could not always use their real information to demonstrate understanding. To counteract this, we tried to emphasize, before and during each test, what we wanted to learn from it. More important, however, was asking participants near-constant questions about what they were doing.

When participants struggled to explain their thoughts out loud, or selected an answer that seemed inaccurate, we dug deeper by having them explain the last question in their own words and describe what action they had taken. We would ask open-ended questions such as, “Are there ways this question could be confusing to anyone else? How might someone misunderstand it?” This allowed participants to explain any confusion they felt without having to admit that they themselves had misunderstood.

 
A participant explains what she is thinking as she works her way through the usability test prototype. Thomas moderates and takes notes.


 

The bias problem

Within minutes of beginning the first tests, we became aware of several biases within our prototype. All of us who had built the prototype are white, while all of our digital divide participants were men and women of color, primarily African American.

In creating dummy names for the “residents” listed in our prototype, we had stuck with the examples from the website demo video: “John Smith” and “Jane Smith.” These classic “default names” have a strong association with white, non-immigrant Americans—an air of neutrality that is actually not neutral at all. As soon as we were in a position of asking people of color to pretend to be “John Smith,” we felt uncomfortable with this choice of names. In addition to being racially non-neutral, the dummy name of the Census-taker was a classically male name, while we had both male and female participants.

We did not have time during the test day to change the names across all screens, so unfortunately this had to wait until that day’s tests were complete. We opted for the gender-neutral first names “Pat” and “Mel,” and the nouns “Cucumber” and “Pickle” as last names. By avoiding traditional surnames in favor of common words, we reduced the risk of alienating testers of any background and kept the reading level accessible.

 
 

We felt uncomfortable asking men and women of color to pretend their name was “John Smith.”

 
 


An additional bias we encountered in the Census, which could not be corrected because the website would not receive any more changes, was the order in which the race options were listed. Although it is common practice to list options in order of how common they are, so that the most people do the least searching, doing so here reinforces the majority/minority status of racial groups in the US.

Most people of color will not know how far down a long list their race will appear, if it appears at all, so finding the correct answer requires considerable reading and mental energy. A more equitable solution might be an alphabetical list, which makes familiar answers easier to find.

Other issues we encountered in testing included text visibility and the ability to adjust text size on screen, as well as the reading level of the language, which was often above the level recommended for government and public websites.

 
Jillian, Thomas, and I outside the Oakland library after completing tests
 

The final count

After the tests were complete, we compared notes and found strong commonalities across the digital divide sessions. We then reviewed test footage and wrote up a synthesis and insights document. These findings were assembled by Mike Ubell and volunteers from other Code for America brigades into a recommendation report.

Unfortunately, the coronavirus pandemic and the shelter-in-place orders that began in March in the Bay Area prevented community centers and local organizations around the country from hosting many Census events. However, data tracking the Census response indicates that, as of October 27th (after the Census was officially closed), about 80% of Census self-responses were completed online, and 67% of households with mail addresses had self-responded; the equivalent figure in 2010 was 74%. After non-response follow-up, all US counties were estimated to have a response rate above 99%, based on the number of existing housing units.

 
 

Lessons learned

 

Being able to work on such an important project was an amazing experience. It was wonderful to get a glimpse of how many people work behind the scenes on such a huge effort, and how the Census Bureau used local communities and people on the ground to improve the process and gather feedback.

The usability testing I participated in was a powerful learning opportunity. It’s easy to learn the theory behind bias and poorly designed tests, but it often takes first-hand experience to develop better practices and improve as a designer. Having had this experience, I now have a much better idea of what to look for and how to correct issues early on when designing my next prototype and usability study. Although we encountered plenty of obstacles while prototyping and testing, I enjoyed problem-solving with this creative, flexible, thoughtful team.