#MoreThanCode is a Participatory Action Research (PAR) project. The first phase of the project consisted of interviews with practitioners and a literature review of work being done with technology to advance social justice and/or the public interest. Findings from this stage informed selection of a diverse set of organizational research partners for the second, expanded phase of research. In the second phase, all project partners worked together to develop the research questions, study design, data collection, data analysis, conclusions, and recommendations. This report summarizes outputs from both phases of the research process. Our study focused primarily on practitioners in the United States.
Project partners helped guide the research design, implementation, and analysis, and engaged their communities and networks in data collection. We sought organizations that play a significant and active role in the ecosystem, that touch and represent a key segment of it, and that were willing and able to commit to helping guide the research design, implementation, and analysis. We were also committed to ensuring that partners represented a diversity of perspectives, and sought to include groups that, despite their extensive work, are not often included in agenda-setting and research. To achieve this, we considered the following criteria:
- Identity (race, class, gender identity, sexual orientation, disability, age, and other factors) of individuals who would work on the project;
- Community (who the orgs prioritize working with);
- Job type/roles (developers, designers, policy advocates, community organizers, educators, researchers, “field builders”);
- Organization type/sector (government, private sector, civil society organizations);
- Organization size;
- Pathways (where folks came from);
- Design and development approach (do they use a collaborative or participatory design approach? Do they use best practices in software development, such as F/LOSS & Open Source, Agile, Lean, User Centered Design?);
- Political analysis (do they use an intersectional analysis of race, class, gender, sexual orientation, disability, immigration, and so on?);
- Field builders (people who are intentionally doing “field-building” work).
Based on these goals, our selection process was as follows: coordination team members from RAD and OTI developed a shared initial shortlist of individuals and organizations, with input and feedback from Code For America and NetGain. We reviewed the shortlist for diversity of experience according to the criteria above, then extended it through several rounds of review, additions, and reprioritization. From this expanded list, each person on our core team then nominated up to 10 organizations; we tallied nominations and then met to come to consensus on a list of 10 organizations and 10 alternates.
During the first stage of the project, we reviewed relevant literature, including scholarly, practitioner, and funder reports focused on related fields. Topic areas included civic tech, open data, appropriate technology, community technology, predictive analytics and algorithmic decision-making, education and talent pipelines, diversity and inclusion initiatives, participatory design methods, values in design, technology’s role in social movements, public interest in the context of public interest law, and media justice. Our goals were to identify and summarize key texts, concepts, and arguments within and between these fields. Notes from this review are available here: T4SJ Lit Review for Kickoff.
We interviewed 109 people, using a modified snowball sample: interviewees were nominated by project partners, coordination team members, and project advisors. In addition, we asked each interviewee to recommend additional people in the field to interview. As we proceeded through our master list of potential interviewees, we regularly reviewed the demographics of interviewees to date, and continually modified outreach in order to maximize diversity along lines including gender, race/ethnicity, geographic location, and sector (government, private, nonprofit). We focused on practitioners in the United States, although a few interviewees reside and work elsewhere. Demographics of our interviewees (and focus group participants) are described in the Demographics section of the report. We used a semi-structured interview guide (available at http://morethancode.cc/assets/resources/interview-guide-II.pdf) for all interviews, and recorded interview audio for transcription. Immediately after each interview, the interviewer(s) wrote up notes about the interview and key takeaways. Key takeaways from all interviews are available at http://bit.ly/t4sj-interviews-keytakeaways. All interviewee names have been changed for privacy purposes.
We conducted 11 focus groups, ranging from as small as six to as large as 33 people per group, with a total of 79 focus group participants. The goal of the focus groups was to gather particular communities to discuss, in a structured way, people’s definitions of the field, pathways into the work, supports and barriers, and visions for the future. Focus groups were conducted in-person and, in some cases, via video chat. All focus groups used a semi-structured Focus Group Guide that mirrored interview questions (available at http://morethancode.cc/assets/resources/T4SJ-Focus-Group-Guide.pdf).
We recorded and transcribed audio of all focus groups and interviews, and replaced all participants’ names with pseudonyms for privacy purposes.
We asked all research participants to complete a demographic questionnaire before or after interviews and focus groups. Of the 189 research participants, 121 completed the questionnaire. The questionnaire was intended to gather individual and organizational demographic information about research participants. We asked participants about their race and ethnicity, gender identity and sexual orientation, age, highest education completed and specialization, personal income, and disability status. We also asked practitioners about the sector they work in, how they define themselves in relation to their work (artist, tech project manager/coordinator, developer/coder, designer, educator, funder, policy advocate, researcher, and journalist), and the positions they hold (such as Director/CEO/Founder, manager/supervisor/leadership role, fellow, consultant, volunteer, or worker-owner/member). Finally, we gave our research participants the option to receive their interview transcript and audio recordings. Our questionnaire instrument is available at https://www.surveymonkey.com/r/t4sj-questionnaire.
We provided all participants with a worksheet containing terms related to the field, such as “civic tech,” “community technology,” “public interest tech,” and so on. We asked them to circle terms they identified with, place question marks next to terms they were not familiar with, and cross out terms they felt did not belong. We also asked them to write in missing terms that they felt were important. The terms worksheet is available here. We used this process not only to collect data but also to spark conversations about why and how certain terms and frames are used. Through this process, we created a list of 252 terms that study participants use to describe their work. These terms can be found at http://bit.ly/t4sj-terms. They were later used to query secondary data sources (see below).
Secondary Data Collection and Analysis
We leveraged a variety of secondary data sources as part of our research process:
- US IRS Form 990 data provided by the Nonprofit Open Data Collective;
- Job Listings from Indeed.com and Idealist.org;
- Existing reports on diversity in adjacent sectors;
- Existing reports on the funding ecosystem.
Additionally, we synthesized information from interviews and desk research to create resources such as the Organization List and the Educational & Fellowship Programs List.
For both the IRS Form 990 data and the job listings, we used the list of 252 terms provided by study participants to describe their work (http://bit.ly/t4sj-terms) to search and filter for relevant organizations and job listings.
To enable our analysis, we imported the IRS Form 990 data into a PostgreSQL database, which allowed fast querying across the more than 450 million records it contains. The SQL queries used for this analysis are available on GitHub. The Nonprofit Open Data Collective is also working towards providing access to the entire data set for further research. We have provided access to the subset of data we analyzed here.
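For illustration, the term-matching logic behind those queries can be sketched in Python. The organization names, filing text, and three-term list below are hypothetical stand-ins for the full 252-term list; the actual queries are the SQL published on GitHub.

```python
import re

# Hypothetical sketch: match participant-provided terms against the free-text
# fields of a Form 990 filing (e.g., mission statements, program descriptions).
TERMS = ["civic tech", "community technology", "media justice"]

# One case-insensitive pattern per term, with word boundaries so that a term
# embedded inside a longer word does not produce a match.
PATTERNS = [re.compile(r"\b" + re.escape(t) + r"\b", re.IGNORECASE) for t in TERMS]

def matching_terms(filing_text):
    """Return the search terms that appear anywhere in a filing's text."""
    return [t for t, p in zip(TERMS, PATTERNS) if p.search(filing_text)]

# Illustrative filings keyed by organization name.
filings = {
    "Org A": "Our mission is to advance civic tech and open government.",
    "Org B": "We fund media justice and Community Technology programs.",
    "Org C": "General grantmaking for the arts.",
}

# Keep only organizations whose filing text uses at least one search term.
matches = {org: terms for org, text in filings.items()
           if (terms := matching_terms(text))}
```

In the real pipeline this filtering ran inside PostgreSQL for speed; the sketch only shows the matching rule applied to each record's text fields.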
To access the job listings data, we built a query tool and scraper, both available on GitHub. For Indeed.com, we registered for API access in order to explore the viability of creating a job website that refers job seekers to opportunities in the ecosystem; this allowed us to query the Indeed API for job postings using the participant-provided terms. Idealist does not have an API, but the website is backed by the search indexing tool Algolia, which made it possible to retrieve structured search results. All of the data and metadata from the job postings were stored in a PostgreSQL database to enable analysis and aggregation. Finally, we used Joblint, a tool that tests and scores job descriptions for issues with sexism, racism, culture, and expectations. We caution that there are some false positives; for example, jobs focused on gender work, such as “Women’s Rights” positions, score higher on “sexism.” The job listings can currently be explored here: http://jobs.morethancode.cc.
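The scoring idea, and the false-positive caveat, can be illustrated with a minimal Python sketch. The flagged-phrase lists below are invented for illustration and do not reflect Joblint's actual rules.

```python
# Minimal rule-based sketch of Joblint-style scoring: count occurrences of
# flagged phrases per issue category. The phrase lists are invented for
# illustration and do not reflect Joblint's actual rules.
FLAGS = {
    "sexism": ["guys", "ninja", "rockstar", "women"],
    "expectations": ["work hard play hard", "wear many hats"],
}

def score(description):
    """Return a per-category count of flagged phrases in a job description."""
    text = description.lower()
    return {category: sum(text.count(phrase) for phrase in phrases)
            for category, phrases in FLAGS.items()}

posting = "Seeking a rockstar developer for our Women's Rights nonprofit."
scores = score(posting)
# Both "rockstar" and "women" raise the sexism count -- the second is exactly
# the kind of false positive noted above: gender-focused job listings can
# score higher on "sexism" without being sexist.
```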
We held two in-person partner convenings. The first was a Research Design convening in March 2017 designed to: (1) build solidarity, relationships, shared project values, and vision; (2) refine and confirm research goals, focus, desired outcomes, and methods; (3) develop project data privacy and retention agreements and policy; (4) develop a project implementation plan; (5) define project advisory roles and nominate potential project advisory board members; and (6) understand what social justice means to each of us. Key outputs from the convening included a refined set of project goals, research questions, prioritized audiences, and outputs that established the project’s research design and methods.
The second convening, the Research Analysis retreat, was held in October 2017. The purpose of this convening was to (1) build solidarity, relationships, shared project values, and vision; (2) review and develop a shared analysis of our research, such as key findings and limitations/gaps; (3) develop recommendations for priority audiences; (4) decide how to frame the research; (5) finalize data privacy, retention, and use agreements and policy; (6) develop a research dissemination plan; and (7) evaluate the project to date. Partners reviewed and discussed the themes that were emerging from the data, and provided their analysis and recommendations to help shape the findings presented in this report. Reflections on these discussions can be found in the T4SJ Convening II Annotated Data Gallery.
Since there is no agreed-upon definition of the field boundary, and no widely accepted universe of participants in the field, it was not possible to conduct a true random selection of individuals or organizations. Therefore, as with any non-random sample, our findings should not be assumed to be representative of the entire field. We especially urge readers to exercise caution when interpreting the demographics of our interviewees and focus group participants: we specifically sought to include women, People of Color, LGBTQI folks, and others who are not well represented across the broader technology sector. Therefore, the demographics of our study participants do not necessarily represent the demographics of any of the subfields we discuss in the report. Many participants from marginalized communities related that they feel like outliers; unfortunately, as in the broader tech sector, white cisgender men with high levels of education are overrepresented among people working in this ecosystem (with the possible exception of the tech for social justice and community technology subfields).
B. Anonymity & Data Protection Policy
To protect research participant privacy and confidentiality, the coordinating organizations and the research partners established processes to document, manage, and store participant data. These included signed MOUs, written informed consent, tightly controlled permissions for access to recordings and transcripts, and anonymization of all transcripts prior to analysis. Our policy and process were as follows:
- MOUs signed by team members included agreement to uphold privacy, confidentiality, and full informed consent of research participants.
- We provided all interview and focus group participants with an Informed Consent Agreement outlining the project purpose, risks and benefits of participating, confidentiality parameters, and voluntary participation. We asked research participants to provide recorded verbal consent at the start of their interview or focus group.
- We limited access to documents containing research participant data to coordinating organizations and research partners. In some instances, data was limited by individual or organization. For example, (1) each interviewer had their own raw data storage folder to store original recordings and transcripts prior to anonymization; (2) access to the demographic questionnaire data was limited to the RAD team.
- Interview and focus group audio transcripts were de-identified for analysis by anonymizing names of participants, organizations, and mentions of persons or organizations that may easily identify the participant or their organization. Access to the document tracking anonymization was limited to specific persons on the research team.
C. Additional Research Outputs
Data Galleries
We produced three Data Galleries, or printable slide decks, of key quotes, findings, and data visualizations for use at face-to-face workshops and project convenings, as well as for online circulation:
- Data Gallery I
- Data Gallery II
- Data Gallery III
Practitioner Profiles
We produced six practitioner profiles, written in a journalistic style that describes each person’s work, their career path, and the challenges and opportunities they faced along the way. These are available at http://morethancode.cc/tags/#Practitioner+Profile
Key Interview Takeaways
We wrote short summaries of key takeaways from all interviews. These are available in this standalone doc: http://bit.ly/t4sj-interviews-keytakeaways.
Data Visualizations
A gallery of interactive data visualizations, including demographic data of project participants, IRS Form 990 data of organizations in the field, relative term frequency in job listings from Indeed, and more, can be found here: https://public.tableau.com/profile/t4sj#!/
Powerful Quotes
After importing anonymized interview transcripts into Dedoose, we coded all transcripts according to our codebook. Coders marked particularly powerful quotes in each category. These quotes were later exported from Dedoose, cleaned up, and used as slides in the data galleries and/or added to this standalone T4SJ Quotes document: http://bit.ly/t4sj-powerfulquotes.
Organizational Database
We developed a database of information about more than 700 organizations and projects, available both as a spreadsheet and via a searchable web interface. We initially seeded it with the organizational list from the Civic Tech Field Guide (available at http://bit.ly/organizecivictech), then added new organizations that came up in project interviews, focus groups, and workshops. The database is searchable by type of organization, sorted into the top-level categories that emerged from our research process, as well as by variables such as “Majority PoC” and/or “Queer.”
Nonprofit Database
In the second stage of research, we decided to build a more comprehensive database of relevant organizations using U.S. IRS Form 990 data provided by the Nonprofit Open Data Collective. We searched through more than 450 million records in that database for relevant organizations, using a list we compiled of 252 different terms that study participants use to describe their work (the terms list can be found here: http://bit.ly/t4sj-terms). The search returned 91,058 unique organizations (foundations and nonprofits) that use one or more of our search terms somewhere in their Form 990s, e.g. in mission statements, program descriptions, or grant descriptions. However, some of the terms provided by practitioners are quite broad and apply to many organizations that may or may not specifically engage in technology work (for example, “criminal justice”). We classified these broad terms as “Other.” When we exclude organizations classified as “Other,” we are left with 39,000 nonprofit organizations that included one or more of our search terms in their tax forms. We encourage others to further explore and analyze the data here.
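The narrowing step just described, from 91,058 term-matched organizations down to roughly 39,000 after excluding matches on "Other" terms only, can be sketched as follows. The category assignments here are illustrative, not the project's actual codebook.

```python
# Hypothetical sketch of the narrowing step: drop organizations whose only
# matching search terms fall into the broad "Other" category.
TERM_CATEGORY = {
    "civic tech": "Tech for Social Good",   # illustrative classification
    "community technology": "Community Technology",
    "criminal justice": "Other",            # broad term; matches many non-tech orgs
}

def is_relevant(matched_terms):
    """True if at least one matched term lies outside the 'Other' category."""
    return any(TERM_CATEGORY.get(term) != "Other" for term in matched_terms)

# Organizations keyed to the search terms found in their Form 990 text.
orgs = {
    "Org A": ["civic tech", "criminal justice"],
    "Org B": ["criminal justice"],
}
relevant = [org for org, terms in orgs.items() if is_relevant(terms)]
```

Here "Org B" matched only the broad term "criminal justice," so it is excluded from the final count even though it appeared in the initial search results.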
Educational Programs Spreadsheet
There are a growing number of university departments, centers, labs, and courses of study dedicated to the confluence of technology and society. We assembled this publicly editable spreadsheet of educational programs, fellowships, informal learning environments, bootcamps, meetups, and online education resources: http://bit.ly/t4sj-programs.
Jobs Database
Job listings provide an important lens on the way that employers think about and describe this work. We created the following jobs database partly as a research tool (to help us understand how employers talk about the field) and partly as a demonstration design for a job board that might help more people enter and advance within the field: https://jobs.morethancode.cc.
Terms List
A spreadsheet of all terms practitioners mentioned to describe the work they do. It includes tabs for the full list, counts of participant identification with each term, top-level categorization codes, and counts of organizations that use these terms in their IRS Form 990 filings: http://bit.ly/t4sj-terms.
Research Instruments
Throughout the project, we made all research instruments publicly available, including our final semi-structured interview guide and focus group guide.
List of Additional Research Outputs
Stage 1 Research Outputs
- Taxonomy of types of public interest tech work people are currently doing and findings from the first nine interviews
- Annotated bibliography
- Notes from the Code for America Summit, and the New America Growing the Public Technology Ecosystem event
- 23 interview transcripts
- Data gallery I, from the first round interviews: http://bit.ly/pit-cfa-gallery
- Semi-structured interview guide, round 1
- Major Themes from first 9 Interviews doc
- List of projects, organizations, and companies doing public interest technology work (note: this spreadsheet is being fed directly into the website)
Stage 2 Research Outputs
- 109 interviews and transcripts
- 11 focus groups (79 participants in total), with notes and transcripts
- 6 Practitioner Profiles: http://morethancode.cc/tags/#Practitioner+Profile
- Revised Interview Guide: http://morethancode.cc/2017/08/23/interview-guide.html
- Revised Focus Group Guide: http://morethancode.cc/2017/08/24/focus-group-facilitation-guide.html
- Job board: https://jobs.morethancode.cc/
- IRS form 990 data browser: https://public.tableau.com/profile/t4sj#!/vizhome/T4SJIRS990/SummaryTableCountsofOrganizationsbyTypeperCategory
- Secondary Data Visualizations: https://public.tableau.com/profile/t4sj#!/
- Educational Programs Spreadsheet: http://bit.ly/t4sj-programs
- Data gallery II: http://bit.ly/t4sj-datagalleryII-annotated
- Key Interview Takeaways: http://bit.ly/t4sj-interviews-keytakeaways
- Powerful Quotes: http://bit.ly/t4sj-powerfulquotes