Projects

Designing the participatory experience

Interface design of online participatory systems can subtly (and not so subtly) influence whether and how members of the public engage with the deliberative process of policymaking. This project seeks design elements that promote more informed, more broad-based, and deeper participation on Regulation Room and other online deliberative platforms. One current study looks at how signaling community norms and expectations through action prompts may enhance participation. Conducted through Amazon Mechanical Turk, this study tests three conditions: a generic prompt, a community-norms-specific prompt, and a content-specific prompt. Future research will examine whether offering newcomers smaller, simpler initial steps reduces the apparent effort of engaging.
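
As a rough sketch of how such a between-subjects design could be wired up, the snippet below deterministically assigns each Mechanical Turk worker to one of the three prompt conditions named above. The prompt wording, the worker IDs, and the hash-based assignment are illustrative assumptions, not the study's actual materials or code.

    # Illustrative sketch only: the condition names come from the study description
    # above; the prompt texts and the hash-based assignment are hypothetical.
    import hashlib

    CONDITIONS = {
        "generic": "Please share your thoughts on the proposed policy.",
        "community_norms": ("Commenters here explain their reasoning and respond to "
                            "others. Please share your thoughts on the proposed policy."),
        "content_specific": ("How would the proposed change affect you personally? "
                             "Please share your thoughts."),
    }

    def assign_condition(worker_id: str) -> str:
        """Deterministically map an MTurk worker ID to one of the three conditions."""
        bucket = int(hashlib.sha256(worker_id.encode()).hexdigest(), 16) % len(CONDITIONS)
        return list(CONDITIONS)[bucket]

    if __name__ == "__main__":
        for worker in ["AX13F0", "AZ77Q2", "AB42K9"]:
            condition = assign_condition(worker)
            print(worker, "->", condition, "|", CONDITIONS[condition])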

Drafting Room Experiment

The Drafting Room experiment is testing the boundaries of effective online civic engagement. Is it possible to go beyond soliciting feedback from individual members of the public and help them move towards collaboratively producing effective policy inputs? Specifically, we are focused on three main questions:

  • What psychological and experiential factors predict different levels of civic engagement in online deliberation of policy?
  • What effect does engagement in online deliberation of public policy have on people’s perceptions of the decision-making processes and institutions?
  • What effect do different facilitative interventions have on co-production of policy inputs in an online environment?

We investigate these questions using a large-scale controlled experiment focused on deliberation of an actual campus policy change. The practical goal of the project is to test platform features and facilitation procedures for collaborative drafting of policy inputs by members of the public.

Framing for participation

One of the main challenges with online civic engagement in policymaking is breaking through the wall of skepticism and distrust that has become common in developed democracies. There is a significant body of literature addressing this issue in the context of traditional mass media, but there is still a lot to unpack in the context of personalized, new media environments. Tackling this challenge, this study looks into the framing of calls for engagement on social media. Currently we are conducting an experiment using Facebook's advertising engine. Controlling for gender and political orientation, we present Facebook users with either thematically or episodically framed calls to action crafted in fewer than 90 characters.
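
A minimal sketch of the resulting design, assuming a simple frame x gender x political-orientation grid: the two ad texts below are invented placeholders that only illustrate the thematic/episodic distinction and the 90-character constraint mentioned above.

    # Illustrative sketch: the two frames and the <90-character limit come from the
    # study description; the ad copy and targeting segments are hypothetical.
    from itertools import product

    AD_COPY = {
        "thematic": "Federal rules shape everyday life. Join the public discussion today.",
        "episodic": "One commenter helped change a rule last year. Add your voice today.",
    }

    GENDERS = ["women", "men"]
    ORIENTATIONS = ["liberal-leaning", "conservative-leaning"]

    def build_cells():
        """Enumerate frame x gender x orientation cells, checking the character limit."""
        cells = []
        for frame, gender, orientation in product(AD_COPY, GENDERS, ORIENTATIONS):
            text = AD_COPY[frame]
            assert len(text) < 90, f"{frame} ad copy exceeds 90 characters"
            cells.append({"frame": frame, "gender": gender,
                          "orientation": orientation, "text": text})
        return cells

    if __name__ == "__main__":
        for cell in build_cells():
            print(cell["frame"], cell["gender"], cell["orientation"],
                  f"({len(cell['text'])} chars)")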

Predictors of effective online civic engagement

Policymaking bodies have limited financial, human, computational, and temporal resources for recruiting members of the public to participate in online deliberations surrounding rulemaking processes. To make the most efficient and effective use of these resources, this project is an effort to improve outreach strategies by identifying people who are likely to be highly motivated and capable contributors. The aim is to develop natural language processing techniques that can analyze online text to detect cognitive and experiential characteristics positively or negatively associated with a person's willingness and ability to participate effectively.

To date, experiments have concentrated on recruiting people from the social media platform Twitter by analyzing the text that Twitter users post. An initial experiment in Spring 2012 examined whether text similarity between rulemaking concepts and a Twitter user's bio, tweets, or some combination was correlated with that person's willingness to participate during an open comment period on CeRI's Regulation Room.
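
A minimal sketch of that kind of similarity signal, assuming TF-IDF cosine similarity with scikit-learn; the concept terms and user text below are invented examples, not data from the 2012 experiment.

    # Illustrative sketch: TF-IDF cosine similarity between rulemaking concept terms
    # and a Twitter user's bio/tweets. All text here is a made-up example.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical rulemaking concept description.
    rule_concepts = "airline passenger rights tarmac delay refunds baggage fees"

    # Hypothetical Twitter material: user bio plus recent tweets, concatenated.
    user_text = (
        "Frequent flyer and consumer advocate. "
        "Stuck on the tarmac for three hours again; airlines should owe us refunds."
    )

    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform([rule_concepts, user_text])

    # A higher score would be read as a (weak) signal of topical relevance.
    score = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    print(f"cosine similarity: {score:.3f}")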

Current experiments continue to explore predictors of an individual's readiness for engagement. In particular, the focus is on developing methods for:

  • identifying topical expertise and interest based on online behavior and content
  • determining linguistic markers of psychological characteristics known to motivate engagement, such as self-efficacy and certain personality traits (a naive illustration follows this list).
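
As a naive illustration of the second direction, the sketch below counts a few surface lexical markers (first-person pronouns and certainty terms) that are sometimes used as rough correlates of traits like self-efficacy; the word lists and scoring are assumptions for illustration, not the project's actual feature set.

    # Naive illustrative sketch: simple lexical marker counts as stand-ins for the
    # psychological-marker features described above. Word lists are assumptions.
    import re
    from collections import Counter

    FIRST_PERSON = {"i", "me", "my", "mine", "we", "our", "us"}
    CERTAINTY = {"definitely", "certainly", "always", "never", "know", "sure"}

    def marker_counts(text: str) -> dict:
        """Count simple lexical markers in a short text such as a tweet."""
        tokens = re.findall(r"[a-z']+", text.lower())
        counts = Counter(tokens)
        return {
            "first_person": sum(counts[w] for w in FIRST_PERSON),
            "certainty": sum(counts[w] for w in CERTAINTY),
            "tokens": len(tokens),
        }

    if __name__ == "__main__":
        tweet = "I know we can definitely fix our transit system if we speak up."
        print(marker_counts(tweet))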

Additionally, we investigate whether outreach messaging can be crafted to amplify and appeal to these interests and characteristics in order to be more persuasive, achieve better response rates, and elicit higher quality comments.

Regulation Room

Regulation Room is designed and operated by the Cornell e-Rulemaking Initiative (CeRI) and hosted by the Legal Information Institute (LII). The site is a pilot project that provides an online environment for people and groups to learn about, discuss, and react to selected rules (regulations) proposed by federal agencies. It expands the types of public input available to agencies in the rulemaking process, while serving as a teaching and research platform.

Situated knowledge in policymaking

Technology-enabled civic participation in policymaking has become one of the most important e-government topics. As barriers to participation are lowered and more citizens are willing to engage directly with decision makers, sifting the flow of commentary to identify the most essential and useful information remains a significant challenge. This project is focused on developing Natural Language Processing (NLP) based solutions for extracting situated knowledge from public commentary on policy; it also aims to create tools for exploring various aspects of that knowledge. This work will help to make broad civic participation more effective in complex public policymaking at the federal, state, and municipal levels.
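
As a rough, purely heuristic stand-in for the NLP components described above, the sketch below flags comment sentences that appear to report first-hand ("situated") experience; the cue phrases and the sample comment are illustrative assumptions, not the project's models or data.

    # Heuristic illustrative sketch: flag sentences that look like first-hand experience.
    # Cue patterns and the sample comment are assumptions for illustration only.
    import re

    EXPERIENCE_CUES = [
        r"\bin my experience\b",
        r"\bi (?:have|had|was|am|work|worked|drive|drove)\b",
        r"\bhappened to me\b",
        r"\bas a\b",  # e.g., "As a truck driver, ..."
    ]
    CUE_PATTERN = re.compile("|".join(EXPERIENCE_CUES), re.IGNORECASE)

    def flag_situated_sentences(comment: str) -> list:
        """Return the sentences in a comment that match at least one experience cue."""
        sentences = re.split(r"(?<=[.!?])\s+", comment.strip())
        return [s for s in sentences if CUE_PATTERN.search(s)]

    if __name__ == "__main__":
        comment = (
            "As a long-haul driver, I was forced to skip rest breaks under the old rule. "
            "The agency should also consider stricter enforcement."
        )
        for sentence in flag_situated_sentences(comment):
            print("situated:", sentence)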

Started in 2012, our work so far has focused primarily on conceptualizing the value of situated knowledge in policymaking activities and building an annotated corpus for NLP analysis. This work has resulted in a number of publications and a corpus that allowed for initial NLP experiments. Currently, we continue to expand the annotated corpus, build on the preliminary results of our NLP experiments, and explore the best ways to visualize situated knowledge for more efficient and effective management of online public consultations. This work is funded by the Jacobs Technion-Cornell Innovation Institute Research Project Award and, most recently, through an NSF grant.

Unsubstantiated Claim Detection

With the continued advancement of information technology, we are experiencing an explosion of user participation on the web. To manage the growing amount of information efficiently, this project aims to automatically evaluate the quality of user-generated texts, such as reviews and comments, by determining whether each claim is accompanied by substantiation. A working assumption here is that user-generated texts consisting of substantiated claims are of better quality than those containing unsubstantiated claims.
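
As a crude illustration of that working assumption, the sketch below treats a claim as "substantiated" if it contains an explicit reason, source, or example marker; the cue list and the sample sentences are assumptions for illustration, not the project's actual model.

    # Crude illustrative sketch: surface cues as a proxy for substantiation.
    # The cue list and the example claims are assumptions for illustration.
    import re

    SUPPORT_CUES = re.compile(
        r"\b(?:because|since|according to|for example|for instance|"
        r"studies show|research shows)\b|\d+\s?%",
        re.IGNORECASE,
    )

    def is_substantiated(claim: str) -> bool:
        """Return True if the claim contains an explicit substantiation cue."""
        return bool(SUPPORT_CUES.search(claim))

    if __name__ == "__main__":
        claims = [
            "This product is terrible.",
            "This product is terrible because it stopped working after two days.",
        ]
        for claim in claims:
            print(is_substantiated(claim), "-", claim)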