Effect of the Code for America Brigade on the Momentum of OSS Activity

Created: 2016-10-07 · By: BeaconLabs · Version: 1.0.0

Key Points

The LocalWiki project saw significant progress during a single-day event, with 633 pages edited, 100 maps added, and 138 new photos uploaded.

Background

Code for America (CfA) is a nonprofit organization based in San Francisco, with a mission to make city governments more efficient, transparent, and responsive to resident needs through technology.

Each year, CfA organizes a fellowship program in which small teams of selected programmers, designers, and other technologists partner with local governments to build web and mobile applications that address community issues.

However, in Fall 2011, CfA faced two major challenges:

  1. There was a lack of infrastructure to coordinate and manage volunteer contributions from outside the organization. Despite receiving over 550 fellowship applications and having more than 10,000 fans and followers on social media—an “enthusiasm surplus”—CfA was unable to effectively leverage this support.

  2. While CfA planned to significantly expand the fellowship program over the next 3–5 years, the current program structure was seen as unsuitable for scaling to smaller cities and towns.

To address these challenges, the foundation for a new initiative, “CFA Everywhere,” was laid in Fall 2011 as a class project in UC Berkeley’s ISSD course. The proposal received funding in early 2012 and was renamed Code for America Brigade, inspired by fire brigades, with the goal of transforming isolated civic hacker efforts into a broader, integrated movement. It aimed to be an open-source platform connecting CfA’s activities with the wider civic hacker community.

Analysis Methods

Dataset

  • Interview Data:

    • Project volunteers: Interviews were conducted with volunteers involved in both technical projects (open source development, hackathons) and non-technical ones (e.g., Habitat for Humanity).
    • Code for America fellows and staff: Interviews were conducted during the project's early stages.
  • Social Network Data:

    • CfA fellowship applicants and Twitter followers: Analyzed to understand their skills and interests.
    • Twitter stream data: Live tweets using the #codeacross hashtag during the “Code Across America” events and post-event tweets were used to generate a world map of activity.
  • Event Participant Data:

    • Code Across America hackathon surveys: Surveys were developed and used to gather participant intentions and feedback.
  • Site Usage Data:

    • Google Analytics: Provided insights on Brigade site traffic, visitor count, time-on-site, page views, bounce rates, return visitor ratios, and referrers.
  • User Feedback Data:

    • Unofficial feedback: Gathered from users via the Brigade mailing list and other channels after launch.
    • Official surveys: A brief open-ended survey was conducted after the launch of Brigade v.1 to collect user experience feedback from registered users.
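The Twitter stream analysis above (live #codeacross tweets rendered as a world map of activity) can be sketched as a simple filter-and-collect step. The record shape below mirrors Twitter's classic API GeoJSON fields, but the data and field access are illustrative assumptions, not the study's actual pipeline.

```python
# Sketch (hypothetical data shapes): filter a tweet stream for the
# #codeacross hashtag and collect geo-coordinates suitable for
# plotting on a world map.
def extract_event_points(tweets, hashtag="#codeacross"):
    """Return (lon, lat) pairs for geotagged tweets containing the hashtag."""
    points = []
    for tweet in tweets:
        if hashtag.lower() not in tweet.get("text", "").lower():
            continue
        coords = tweet.get("coordinates")  # GeoJSON: {"coordinates": [lon, lat]}
        if coords:
            points.append(tuple(coords["coordinates"]))
    return points

sample = [
    {"text": "Hacking on civic apps #CodeAcross!",
     "coordinates": {"coordinates": [-78.64, 35.77]}},
    {"text": "Unrelated tweet", "coordinates": {"coordinates": [0.0, 51.5]}},
    {"text": "#codeacross kickoff", "coordinates": None},  # not geotagged
]
print(extract_event_points(sample))  # → [(-78.64, 35.77)]
```

Only the geotagged, hashtag-matching tweet survives; the resulting points would then be handed to any mapping library.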

Preliminary Research and Insight Extraction:

  • Open Source Development Interviews: Found that the top motivator for volunteers was “becoming part of a community of people with shared interests,” leading to the insight that the platform should be designed not as a project management tool but as a community organizing platform.
  • Social Network Analysis: Revealed that many CfA followers and applicants were interested in non-coding skills such as graphic design, UX, project management, and community work, indicating the platform should be designed not only for developers but for a diverse range of volunteers.
  • Fellow and Staff Interviews: Highlighted that CfA staff lacked bandwidth to manage external projects, leading to the insight that CfA should act as a “catalyst for unleashing the potential of supporters nationwide,” focusing on enabling self-organization and resource sharing.

Shift to Lean Startup and Agile Development:

  • Transitioning from an academic proposal to an actual development project required defining a Lean Startup-style Minimum Viable Product (MVP).
  • Agile Inception Event: Held at the CfA office in January 2012, where stakeholders gathered to define MVP “user stories” using index card brainstorming and dot-voting techniques.
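The index-card-plus-dot-voting technique used at the inception event reduces to a vote tally over story titles. A minimal sketch (the story titles and votes below are invented for illustration; the report does not list the actual cards):

```python
from collections import Counter

# Sketch: tally dot votes cast on index-carded user stories and rank
# them to draw the MVP cut line. One list entry per dot placed.
def rank_stories(votes):
    return Counter(votes).most_common()

dots = ["sign up via GitHub", "join a Brigade", "sign up via GitHub",
        "view app detail page", "join a Brigade", "sign up via GitHub"]
for story, count in rank_stories(dots):
    print(f"{count} dots: {story}")
```

Stories above an agreed dot threshold would become MVP user stories; the rest are deferred.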

Post-Launch Evaluation and Improvements:

  • Analysis of Unofficial Feedback: Post-launch feedback helped identify areas for improvement (e.g., lack of content, unclear app status, absence of editing tools).
  • User Survey Analysis: Open-ended responses from a five-question survey were manually parsed, categorized, and visualized in bar charts. Key findings included user excitement about connecting with others, the need for better support for collaboration, and demands for documentation, case studies, and tutorials.
  • Google Analytics Analysis: Tracked post-launch traffic and engagement, revealing that most traffic was direct and engagement was low (approx. 90% stayed less than five minutes, 85% had fewer than five return visits).
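The survey analysis above (open-ended responses parsed into categories and charted) was done manually in the study; a keyword-based sketch shows the shape of that step. The theme names and keyword lists here are assumptions for illustration only.

```python
from collections import Counter

# Sketch: bucket open-ended survey responses into themes via keyword
# matching, yielding counts ready for a bar chart. The actual study
# categorized responses by hand; these themes are illustrative.
THEMES = {
    "connection": ["connect", "community", "meet"],
    "collaboration": ["collaborate", "work together", "team"],
    "documentation": ["docs", "tutorial", "case study", "documentation"],
}

def categorize(responses):
    counts = Counter()
    for resp in responses:
        low = resp.lower()
        for theme, keywords in THEMES.items():
            if any(k in low for k in keywords):
                counts[theme] += 1
    return counts

answers = [
    "Excited to connect with other civic hackers",
    "We need tutorials and more documentation",
    "Hard to collaborate without a wiki",
]
print(categorize(answers))
```

The resulting counts map directly onto the bar charts described in the report.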

Result

  • Platform Status at Launch:

    • brigade.codeforamerica.org hosted 9 reusable civic apps and provided links and instructions for deploying local instances.
    • Over 250 registered users, more than 80 locations, and approximately 40 Brigades were formed.
    • MVP features included sign-up via email or GitHub, profile creation, joining/starting Brigades, viewing apps, Brigades, people, and location detail pages, submitting/viewing challenges, and sharing via social media.
    • Based on an open design strategy—“don’t reinvent the wheel” and “don’t monopolize user interactions”—the platform integrated with third-party APIs like Civic Commons and Gravatar and recommended using existing tools like Google Groups.
    • Brigades were designed as purely virtual associations, allowing users to freely form groups, though this information architecture proved confusing for new users.
    • The app deployment checklist was simplified from its original concept due to documentation gaps and varied deployment processes, leading to increased burden on administrators.
  • Launch and Early Impact at SXSW:

    • On March 14, 2012, the Brigade platform officially launched during the SXSW Interactive keynote.
    • The launch was a significant success, with over 120 users from 45 locations signing up within the first 24 hours.
    • The February “Code Across America” hackathon series spanned 16 cities, boosting existing civic tech projects. For example, in Raleigh, NC, the LocalWiki project saw ~50 volunteers contribute 633 page edits, 100 maps, and 138 new photos—nearly doubling six months’ worth of progress in a single day. The events also helped forge connections among civic technologists across cities and build a support community around open source software.
  • Initial Feedback and Challenges:

    • Unofficial user feedback pointed out content gaps, unclear distinctions between "deployable" and "deployed" apps, and a lack of editing functionality.
    • Most communication between Brigade staff and civic hackers continued through offline channels such as email, conference calls, and Google Hangouts—useful for forming new relationships, identifying partners, and addressing urgent issues. The Brigade director noted that during early platform development, staff were expected to bridge functionality gaps through direct interaction.
    • The site wasn’t always the main driver of community activity. Instead, it functioned more as a symbolic record of connections between people, places, and projects.
    • Notable challenges included the lack of task tracking (relying only on GitHub links) and inadequate support for outreach and organization by fledgling Brigades.
  • Brigade v.1 Updates:

    • A site design update was deployed in early April 2012.
    • Navigation was clarified and restructured around “Applications,” “Brigades,” and “People.”
    • Due to resource constraints, the “Challenges” feature was temporarily shelved.
    • Informational content was added to explain site terminology (apps, deploy, Brigade).
    • A new “status” column was added to the list of deployed apps to better indicate progress.
  • Survey and Google Analytics Evaluation:

    • Survey results showed that civic hackers were excited to connect via the platform but also desired more connection support (e.g., communication tools, wiki spaces, mailing list features).
    • Feature limitations were found to contribute to reduced engagement among some users.
    • High demand was expressed for content expansions like case studies, tutorials, and wiki/forum features.
    • Google Analytics showed a major spike in traffic during the SXSW launch, followed by a slowdown and relatively low user engagement (short visit times, low repeat visits). There was also room for improvement in social media referrals.
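The engagement figures cited above (roughly 90% of visits under five minutes, 85% of visitors with fewer than five return visits) are simple ratios. Google Analytics reports them as aggregates; the per-session record shape below is a hypothetical reconstruction for illustration.

```python
# Sketch: compute the two engagement ratios from per-session records.
# Record shape is hypothetical; GA provides these figures directly.
def engagement_ratios(sessions):
    """sessions: list of {"visitor": id, "duration_sec": n} records."""
    short = sum(1 for s in sessions if s["duration_sec"] < 300)
    visits_per_visitor = {}
    for s in sessions:
        visits_per_visitor[s["visitor"]] = visits_per_visitor.get(s["visitor"], 0) + 1
    low_return = sum(1 for v in visits_per_visitor.values() if v < 5)
    return short / len(sessions), low_return / len(visits_per_visitor)

log = [
    {"visitor": "a", "duration_sec": 60},
    {"visitor": "a", "duration_sec": 90},
    {"visitor": "b", "duration_sec": 620},
    {"visitor": "c", "duration_sec": 45},
]
short_ratio, low_return_ratio = engagement_ratios(log)
print(f"<5 min visits: {short_ratio:.0%}, <5 return visits: {low_return_ratio:.0%}")
```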

Based on these findings, recommendations for Brigade v.2 included enabling user-generated content (e.g., wiki-style app “recipes,” technical updates), enhancing content discovery via location-based features (e.g., Brigade maps, filtering by people), increasing visibility of social sharing features, and launching a Brigade blog.

Citation

https://www.ischool.berkeley.edu/sites/default/files/student_projects/karimcglynn_codeforamericabrigade_finalreport.pdf

Results

  • Mixed
    • Construction and deployment of 'Code for America Brigade'
    • Generation of 'momentum' in existing civic tech projects (e.g., number of page edits, map additions, and photo uploads on LocalWiki)

Methodologies

  • Interview, Survey, Google Analytics

Data Sources

  • Interview, Survey, Google Analytics

Tags

oss