
Publications & Working Papers

Farina, C.R., Blake, C.L., Newhart, M., & Nam, C. (forthcoming). Digital Support for Enhanced Democratic Participation in US Rulemaking. In C. Cuijpers (Ed.), Digital Democracy in a Globalised World.

The deployment of digital technologies in rulemaking has indeed increased the overall number of public comments. It is generally conceded, however, that the growing volume of comments does not represent the kind of meaningful participatory input that affects policy outcomes in this process.

In this chapter we examine these developments, explaining why the federal government's e-rulemaking strategy has been unsuccessful in addressing the barriers to effective citizen participation in rulemaking. We also note how digital technologies have enabled advocacy organizations to achieve a dramatic rise in the quantity of public comment, without a corresponding qualitative shift in existing disparities in influence over rulemaking outcomes.

We then describe an alternative approach to using digital technologies to help level the participatory playing field in U.S. rulemaking. RegulationRoom is a web-based consultation platform that has facilitated public participation in six federal rulemakings to date. It is the primary research vehicle for CeRI (the Cornell eRulemaking Initiative), an interdisciplinary project at Cornell University that brings together researchers in law, conflict resolution, communication and computing and information science. The RegulationRoom project demonstrates that purposeful design of digital tools, and associated practices of human support, can lower public participation barriers and facilitate meaningful forms of broader citizen engagement in rulemaking. At the same time, however, the project underscores that not even well-designed civic engagement technology can elicit such participation without the investment of considerable effort, both by citizens and by those who seek their informed policy input.

McInnis, B., Centivany, A., Kim, J., Poblet, M., Levy, K. & Leshed, G. (2017). Crowdsourcing Law and Policy: A Design-Thinking Approach to Crowd-Civic Systems. Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, Portland, Oregon, 355-361.

Crowdsourcing technologies, strategies and methods offer new opportunities for bridging existing gaps among law, policymaking, and the lived experience of citizens. In recent years, a number of initiatives across the world have applied crowdsourcing to contexts including constitutional reform, drafting federal bills, and generating local policies. However, crowd-civic systems also come with challenges and risks, such as socio-technical barriers, marginalization of specific groups, and silencing of interests.

Using a design-thinking approach, this workshop will address both opportunities and challenges of crowd-civic systems to develop best practices for increasing public engagement with law and policy.

McInnis, B. & Leshed, G. (2016). Running user studies with crowd workers. ACM Interactions, 23(5), 50-53.

Crowd work platforms are becoming popular among researchers in HCI and other fields for social, behavioral, and user experience studies. Platforms like Amazon Mechanical Turk (AMT) connect researchers, who set the studies up as tasks or jobs, to crowd workers recruited to complete the tasks for payment. Crowd workers on AMT (called Turkers) are quick and easy to recruit for online studies, are cheaper than paying people to come to the lab, and can provide useful feedback on prototypes through user research. Plus, Turkers are considered more representative of the general (U.S.-based) population than the convenient undergraduate sample prevalent in academic research.

But behavioral and user studies on crowd work platforms can unearth challenges foreign to more traditional user research studies. For example, Turkers don't see themselves as study participants but rather as workers; they come to the AMT platform to do work and get paid, not to help with research. On the side of the crowd employer (called a Requester in AMT), it is easy to ignore Turkers; their work is mostly anonymous and the crowd work platform manages the labor arrangement and transactions, making it trivial to reject or even steal work. A poorly designed experiment, such as a broken study platform or faulty survey questions, is difficult to detect because of the lack of direct contact between the researcher and participants. Further, such studies are often carried out quickly: A large number of workers can be recruited in a short amount of time, making it difficult to detect problems in the study until after many participants have engaged with it.

Here, we report on the lessons we learned about conducting research with crowd workers while running a behavioral experiment in AMT. We discovered the gray area of being both a researcher and an employer, and learned through trial and error what it takes to be a responsible researcher dealing with a large participant crowd. We hope that other researchers interested in using crowd platforms for user studies and behavioral experiments can learn from these lessons about treating crowd participants ethically and collaborating with them toward good results for the researcher and meaningful participation for the worker.

McInnis, B., Murnane, E., Epstein, D., Cosley, D. & Leshed, G. (2016). One and Done: Factors affecting one-time contributors to ad-hoc online communities. Proceedings of CSCW 2016, 609-623.

Often, attention to "community" focuses on motivating core members or helping newcomers become regulars. However, much of the traffic to online communities comes from people who visit only briefly. We hypothesize that their personal characteristics, design elements of the site, and others' activity all affect the contributions these "one-timers" make. We present the results from an experiment asking Amazon Mechanical Turk ("AMT") workers to comment on the AMT participation agreement in a discussion forum. One-timers with stronger ties to other Turkers or feelings of trust for Amazon are more likely to leave more - but shorter and less relevant - comments, while those with higher self-efficacy leave longer and more relevant comments. The phrasing of prompts also matters; a general appeal for personally-reflective contributions leads to comments that are less relevant to community discussion topics. Finally, activity matters too; synchronous activity begets responses, while pre-existing content tends to suppress them. These findings suggest design moves that can help communities harness this "long tail" of contribution.

McInnis, B., Cosley, D., Nam, C., & Leshed, G. (2016). Taking a HIT: Designing around Rejection, Mistrust, Risk, and Workers' Experiences in Amazon Mechanical Turk. Proceedings of the 34th Annual ACM Conference on Human Factors in Computing Systems (CHI'16). ACM, 2016. p. 2271-2282.

Online crowd labor markets often address issues of risk and mistrust between employers and employees from the employers' perspective, but less often from that of employees. Based on 437 comments posted by crowd workers (Turkers) on the Amazon Mechanical Turk (AMT) participation agreement, we identified work rejection as a major risk that Turkers experience. Unfair rejections can result from poorly-designed tasks, unclear instructions, technical errors, and malicious Requesters. Because the AMT policy and platform provide little recourse to Turkers, they adopt strategies to minimize risk: avoiding new and known bad Requesters, sharing information with other Turkers, and choosing low-risk tasks. Through a series of ideas inspired by these findings-including notifying Turkers and Requesters of a broken task, returning rejected work to Turkers for repair, and providing collective dispute resolution mechanisms-we argue that making risk reduction and trust building a first-class design goal can lead to solutions that improve outcomes around rejected work for all parties in online labor markets.

Epstein, D. & Leshed, G. (2016). The Magic Sauce: Practices of Facilitation in Online Policy Deliberation. Journal of Public Deliberation, Vol. 12: Iss. 1, Article 4.

Online engagement in policy deliberation is one of the more complex aspects of open government. Previous research on human facilitation of policy deliberation has focused primarily on the citizens who need facilitation. In this paper we unpack the facilitation practices from the perspective of the moderator. We present an interview study of facilitators in RegulationRoom - an online policy deliberation platform. Our findings reveal that facilitators focus primarily on two broad activities: managing the stream of comments and interacting with comments and commenters - both aimed at obtaining high quality public input into the particular policymaking process. Managing the immediate goals of online policy deliberations, however, might overshadow long-term goals of public deliberation, i.e. helping individuals develop participatory literacy beyond a single policy engagement.

Epstein, D. & Blake, C. "Regulation Room," in Civic Media: Technology, Design, Practice, ed. Eric Gordon and Paul Mihailidis. Cambridge: MIT Press, 2016. 221-228.

Civic Media: Technology, Design, Practice takes an interdisciplinary approach to the questions: "How are digital technologies being used to foster connections between citizens and formal and informal public institutions? Can technologies alter public processes, change the way people interact with government, and deepen engagement with public life?"

Epstein and Blake's case study was one of 22 selected for inclusion from roughly 220 submissions. The article traces the features of the RegulationRoom platform, an online discussion site developed by CeRI (the Cornell e-Rulemaking Initiative) to support meaningful public participation in rulemaking - the process federal agencies use to make new health, safety, and economic regulations. Although rulemaking is one of the most formally open and participatory mechanisms in US federal policymaking, complexity of information and lack of knowledge about the process deter effective public engagement. Epstein and Blake describe how the digital tools and human-supported interventions of RegulationRoom work to overcome these barriers.

Park, J., Blake, C., & Cardie, C., Toward Machine-assisted Participation in eRulemaking: An Argumentation Model of Evaluability. Proceedings of the 15th International Conference on Artificial Intelligence and Law (ICAIL'15). ACM, 2015. p. 206-210.

Increasingly, government agencies and civic technologists have been using online tools to foster broader and better public participation in rulemaking-the multi-step process that federal agencies use to develop new regulations. However, participation by citizens who are not experts in this process-which emphasizes reason-giving and arguments over voting or statements of preference-has led to a growth of public input that is hard to evaluate. In contrast to arguments put forward by policy analysts, lawyers, and others with formal training, comments from non-experts rarely explicitly state the premises for conclusions or provide objective evidence for factual claims.

RegulationRoom, an online discussion platform developed by CeRI (the Cornell e-Rulemaking Initiative), works to counter the lack of participation literacy that makes it difficult for non-experts to participate effectively in rulemaking. This is primarily achieved through the use of moderators who press commenters for additional details and support. However, these efforts are highly resource intensive. As part of CeRI's interdisciplinary research efforts, researchers in Cornell's Department of Computer Science have been examining how to use natural language processing to flag comments lacking support for their arguments, so that moderators can find such comments and give feedback quickly.

Farina C., Newhart, M. & Blake, C. (2015). The problem with words: Plain language and public participation in rulemaking. 83 George Washington Law Review 1358 (2015)

This article, part of the symposium commemorating the 50th anniversary of the Administrative Conference of the United States, situates ACUS's recommendations for improving public rulemaking participation in the context of the federal "plain language" movement. The connection between broader, better public participation and more comprehensible rulemaking materials seems obvious, and ACUS recommendations have recognized this connection for almost half a century. Remarkably, though, the series of Presidential and statutory plain-language directives have not even mentioned the relationship of comprehensibility to participation-until very recently. In 2012, the Office of Information and Regulatory Affairs (OIRA) issued "guidance" instructing that "straightforward executive summaries" be included in "lengthy or complex rules." OIRA reasoned that "[p]ublic participation cannot occur … if members of the public are unable to obtain a clear sense of the content of [regulatory] requirements."

Using a novel dataset of proposed and final rule documents from 2010-2014, we examine the effect of the executive summary requirement. We find that the use of executive summaries increased substantially compared to the modest executive-summary practice pre-Guidance. We also find that agencies have done fairly well in providing summaries for "lengthy" rules. Success in providing the summary in "complex" rules and in following the standard template included with the Guidance is mixed. Our most significant finding is the stunning failure of the new executive summary requirement to produce more comprehensible rulemaking information. Standard readability measures place the executive summaries at a level of difficulty that would challenge even college graduates. Moreover, executive summaries are, on average, even less readable than the remainder of the rule preambles that they are supposed to make accessible to a broader audience.
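The "standard readability measures" invoked here belong to a well-known family of formulas. As an illustrative sketch, the following computes the published Flesch-Kincaid grade level; the syllable counter is a crude vowel-group heuristic (production tools use pronunciation dictionaries), and the two sample sentences are invented, not drawn from the article's dataset.

```python
import re

def count_syllables(word):
    """Crude heuristic: count vowel groups, discount a silent final 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(1, n)

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

# Invented samples: plain prose vs. dense bureaucratic prose.
plain = "The rule starts next year. Bags must arrive with you."
dense = ("Notwithstanding antecedent notifications, implementation of the "
         "aforementioned regulatory requirements necessitates comprehensive "
         "organizational preparation.")
```

On these samples, `fk_grade(dense)` lands many grade levels above `fk_grade(plain)`: long sentences packed with polysyllabic words drive the score past the college-graduate range, which is the pattern the article reports for agency executive summaries.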

We do find some bright spots in this generally gloomy picture, as some agencies (or parts of agencies) are doing better at producing readable executive summaries. We end with speculation about why efforts to "legislate" more comprehensible rulemaking documents persistently fail. We urge ACUS to pursue its commitment to broader rulemaking participation by studying agency practices in this area, with the goal of identifying best practices and making informed and practicable recommendations for producing rulemaking materials that interested members of the public could actually understand.

Farina, C., Kong, H., Blake, C., Newhart, M. & Luka, N. (2014). Democratic Deliberation in the Wild: The McGill Online Design Studio and the RegulationRoom Project. 41 Fordham Urban Law Journal 1527 (2014).

Although there is no single unified conception of deliberative democracy, the generally accepted core thesis is that democratic legitimacy comes from authentic deliberation on the part of those affected by a collective decision. This deliberation must occur under conditions of equality, broadmindedness, reasonableness, and inclusion. In exercises such as National Issues Forums, citizen juries, and consensus conferences, deliberative practitioners have shown that careful attention to process design can enable ordinary citizens to engage in meaningful deliberation about difficult public policy issues.

Typically, however, these are closed exercises-that is, they involve a limited number of participants, often selected to achieve a representative sample, who agree to take part in an extended, often multi-stage process.

The question we begin to address here is whether the aspirations of democratic deliberation have any relevance to conventional public comment processes. These processes typically allow participation that is universal (anyone who shows up can participate) and highly variable (ranging from brief engagement and short expressions of outcome preferences to protracted attention and lengthy brief-like presentations). Although these characteristics preclude the kind of control over process and participants that can be achieved in a deliberation exercise, we argue that conscious attention to process design can make it more likely that more participants will engage in informed, thoughtful, civil, and inclusive discussion. We examine this question through the lens of two action-based research projects: the McGill Online Design Studio (MODS), which facilitates public participation in Canadian urban planning, and RegulationRoom, which supports public comment in U.S. federal rulemaking.

Epstein, D., Farina, C. & Heidt, J. (2014). The value of words: narrative as evidence in policy-making. Evidence & Policy. 10(2) (2014) 243-258.

**This article was recognized by Evidence & Policy as one of its most downloaded articles of 2015.

Policy makers today rely primarily on technical data as their basis for decision making. Yet, there is a potentially underestimated value in substantive reflections of the members of the public who will be affected by a particular regulation. Viewing professional policy makers and professional commenters as a community of practice, we describe their limited shared repertoire with the lay members of the public as a significant barrier to participation. Based on our work with Regulation Room, we offer an initial typology of narratives - complexity, contributory context, unintended consequences, and reframing - as a first step towards overcoming conceptual barriers to effective civic engagement in policy making.

Farina, C., Newhart, M. & Heidt, J. (2014). Rulemaking vs. Democracy: Judging and Nudging Public Participation That Counts. Environmental Law Reporter, 44 10670, August 2014.

An underlying assumption of many open government enthusiasts is that more public participation will necessarily lead to better government policymaking: If we use technology to give people easier opportunities to participate in public policymaking, they will use these opportunities to participate effectively. Yet, experience thus far with technology-enabled rulemaking (e-rulemaking) has not confirmed this "if-then" causal link. Such causal assumptions include several strands: If we give people the opportunity to participate, they will participate. If we alert people that government is making decisions important to them, they will engage with that decisionmaking. If we make relevant information available, they will use that information meaningfully. If we build it, they will come. If they come, we will get better government policy.

This Article considers how this flawed causal reasoning around technology has permeated efforts to increase public participation in rulemaking. The observations and suggestions made here flow from conceptual work and practical experience in the Regulation Room project. Regulation Room is an ongoing research effort by the Cornell eRulemaking Initiative (CeRI), a multidisciplinary group of researchers who partner with the U.S. Department of Transportation (DOT) and other federal agencies. At the core is an experimental online public participation platform that offers selected "live" agency rulemakings. The goal is discovering how information and communication technologies (ICTs) can be used most effectively to engender broader, better participation in rulemaking and similar types of policymaking.

Farina, C., Epstein, D., Heidt, J., & Newhart, M. (2014). Designing an Online Civic Engagement Platform: Balancing "More" vs. "Better" Participation in Complex Public Policymaking. International Journal of E-Politics, 5(1) 16-40, January-March 2014.

A new form of online citizen participation in government decisionmaking has arisen in the United States (U.S.) under the Obama Administration. "Civic Participation 2.0" attempts to use Web 2.0 information and communication technologies to enable wider civic participation in government policymaking, based on three pillars of open government: transparency, participation, and collaboration. Thus far, the Administration has modeled Civic Participation 2.0 almost exclusively on a universalist/populist Web 2.0 philosophy of participation.

In this model, content is created by users, who are enabled to shape the discussion and assess the value of contributions with little information or guidance from government decisionmakers. The authors suggest that this model often produces "participation" unsatisfactory to both government and citizens. The authors propose instead a model of Civic Participation 2.0 rooted in the theory and practice of democratic deliberation. In this model, the goal of civic participation is to reveal the conclusions people reach when they are informed about the issues and have the opportunity and motivation seriously to discuss them. Accordingly, the task of civic participation design is to provide the factual and policy information and the kinds of participation mechanisms that support and encourage this sort of participatory output. Based on the authors' experience with Regulation Room, an experimental online platform for broadening effective civic participation in rulemaking (the process federal agencies use to make new regulations), the authors offer specific suggestions for how designers can strike the balance between ease of engagement and quality of engagement - and so bring new voices into public policymaking processes through participatory outputs that government decisionmakers will value.

Epstein, D., Newhart, M., & Vernon, R. (2014). Not by Technology Alone: The "Analog" Aspects of Online Public Engagement in Policymaking. Government Information Quarterly 31(2) (2014) 337-344.

Between Twitter revolutions and Facebook elections, there is a growing belief that information and communication technologies are changing the way democracy is practiced. The discourse around e-government and online deliberation is frequently focused on technical solutions and based in the belief that if you build it correctly they will come. This paper departs from the literature on the digital divide to examine barriers to online civic participation in policy deliberation.

While most scholarship focuses on identifying and describing those barriers, this study offers an in-depth analysis of what it takes to address them using a particular case study. Based in the tradition of action research, this paper focuses on analysis of practices that evolved in Regulation Room - a research project of CeRI (Cornell eRulemaking Initiative) that works with federal government agencies to help them engage the public in complex policymaking processes. It draws a multidimensional picture of motivation, skill, and general political participation divides, or the "analog" aspects of the digital divide in online civic participation and policy deliberation.

Park, J. & Cardie, C., (2014). Identifying Appropriate Support for Propositions in Online User Comments. Proceedings of the ACL 2014 Workshop on Argumentation Mining. Baltimore, MD: ACL.

The ability to analyze the adequacy of supporting information is necessary for determining the strength of an argument. This is especially the case for online user comments, which often consist of arguments lacking proper substantiation and reasoning. Thus, we develop a framework for automatically classifying each proposition as UNVERIFIABLE, VERIFIABLE NONEXPERIENTIAL, or VERIFIABLE EXPERIENTIAL, where the appropriate type of support is reason, evidence, and optional evidence, respectively. Once the existing support for propositions is identified, this classification can provide an estimate of how adequately the arguments have been supported. We build a gold-standard dataset of 9,476 sentences and clauses from 1,047 comments submitted to an eRulemaking platform and find that Support Vector Machine (SVM) classifiers trained with n-grams and additional features capturing the verifiability and experientiality exhibit statistically significant improvement over the unigram baseline, achieving a macro-averaged F1 of 68.99%.
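The n-gram classification setup described above can be sketched in miniature. The three labels come from the paper; everything else here is illustrative: the sentences are invented, and a simple dependency-free multi-class perceptron stands in for the authors' SVMs, which also used richer verifiability and experientiality features.

```python
from collections import defaultdict

# Label scheme from the paper; the toy sentences below are invented.
LABELS = ["UNVERIFIABLE", "VERIFIABLE NONEXPERIENTIAL", "VERIFIABLE EXPERIENTIAL"]

DATA = [
    ("my checked bag was lost twice last year", "VERIFIABLE EXPERIENTIAL"),
    ("i waited four hours on the tarmac", "VERIFIABLE EXPERIENTIAL"),
    ("airlines cancel thousands of flights annually", "VERIFIABLE NONEXPERIENTIAL"),
    ("the proposal covers flights longer than three hours", "VERIFIABLE NONEXPERIENTIAL"),
    ("this rule is unfair to travelers", "UNVERIFIABLE"),
    ("passengers deserve better treatment", "UNVERIFIABLE"),
]

def featurize(text):
    """Unigram and bigram features, echoing the paper's n-gram baseline."""
    toks = text.lower().split()
    return toks + [f"{a} {b}" for a, b in zip(toks, toks[1:])]

class PerceptronClassifier:
    """Multi-class perceptron: one weight vector per label."""
    def __init__(self, labels):
        self.labels = labels
        self.weights = {lab: defaultdict(float) for lab in labels}

    def predict(self, text):
        feats = featurize(text)
        return max(self.labels,
                   key=lambda lab: sum(self.weights[lab][f] for f in feats))

    def train(self, data, epochs=100):
        for _ in range(epochs):
            for text, gold in data:
                guess = self.predict(text)
                if guess != gold:  # reward gold features, penalize the mistake
                    for f in featurize(text):
                        self.weights[gold][f] += 1.0
                        self.weights[guess][f] -= 1.0

model = PerceptronClassifier(LABELS)
model.train(DATA)
```

After training, the model fits this tiny separable dataset; a real system along the paper's lines would train SVMs on thousands of annotated clauses and report macro-averaged F1 across the three classes.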

Farina, Cynthia R. and Newhart, Mary J., (2013). Rulemaking 2.0: Understanding and Getting Better Public Participation (IBM Center for the Business of Government Report, Using Technology Series). Washington, DC: IBM Center for the Business of Government.

More than a decade after the launch of Regulations.gov, the government-wide federal online rulemaking portal, and nearly four years since the Obama Administration directed agencies to use "innovative tools and practices that create new and easier methods for public engagement," there are still more questions than answers about what value social media and other Web 2.0 technologies can bring to rulemaking-and about how agencies can realize that value.

This report, commissioned by the IBM Center for the Business of Government, begins to provide those answers. Drawing on insights from a number of disciplines and on three years of actual experience in the Regulation Room project, CeRI researchers explain the barriers that new rulemaking participants must overcome. And they make specific recommendations for lowering these barriers using outreach strategies, information design, and choice of participation tools. Although the particular focus is public participation in the context of rulemaking, much of what is discussed here will help any government or civil society group seeking broader, better public engagement in complex policy decisions.

Farina, C.R., Epstein, D., Heidt, J.B., & Newhart, M.J. (2013) Regulation Room: Getting "More, Better" Civic Participation in Complex Government Policymaking. Transforming Government: People, Process and Policy, Vol. 7 Iss: 4, pp. 501-516.

Purpose - Rulemaking (the process agencies use to make new health, safety, social and economic regulations) is one of the U.S. government's most important policymaking methods and has long been a target for eGovernment efforts. Although broad transparency and participation rights are part of its legal structure, significant barriers prevent effective engagement by many citizens.

Design/methodology/approach - RegulationRoom.org is an online experimental eParticipation platform, designed and operated by CeRI, the cross-disciplinary Cornell eRulemaking Initiative. Using the Regulation Room as a case study, this paper addresses what capacities are required for effective civic engagement and how they can be nurtured and supported by an online participation system.

Findings - Our research suggests that effectively designing and deploying technology, although essential, is only one dimension of realizing broader, better online civic engagement. Effective eParticipation systems must be prepared to address procedural, social, and psychological barriers that impede citizens' meaningful participation in complex policymaking processes. Our research also suggests the need for re-conceptualizing the value of broad civic participation to the policymaking processes and for recognizing that novice commenters engage with policy issues differently than experienced insiders.

Practical implications - The paper includes a series of strategic recommendations for policymaking seeking public input. While it indicates that a broader range of citizens can indeed be meaningfully engaged, it also cautions that getting better participation from more people requires the investment of resources. More fundamental, both government decisionmakers and participation designers must be open to recognizing non-traditional forms of knowledge and styles of communication - and willing to devise participation mechanisms and protocols accordingly.

Originality/value - This paper describes lessons from a unique design-based research project with both practical and conceptual implications for more, better civic participation in complex government policymaking.

Farina, C.R., Newhart, M.J., Heidt, J., & Solivan, J. (2013). Balancing Inclusion and "Enlightened Understanding" in Designing Online Civic Participation Systems: Experiences from Regulation Room. Proceedings of the 14th Annual International Conference on Digital Government Research. Quebec City, Quebec: ACM.

New forms of online citizen participation in government decision making have been fostered in the United States (U.S.) under the Obama Administration. Use of Web information technologies has been encouraged in an effort to create more back-and-forth communication between citizens and their government. These "Civic Participation 2.0" attempts to open the government up to broader public participation are based on three pillars of open government---transparency, participation, and collaboration.

Thus far, the Administration has modeled Civic Participation 2.0 almost exclusively on the Web 2.0 ethos, in which users are enabled to shape the discussion and encouraged to assess the value of its content. We argue that strict adherence to the Web 2.0 model for citizen participation in public policymaking can produce "participation" that is unsatisfactory to both government decisionmakers and public participants. We believe that successful online civic participation design must balance inclusion and "enlightened understanding," one of the core conditions for democratic deliberation. Based on our experience with Regulation Room, an experimental online participation platform trying to broaden meaningful public engagement in the process federal agencies use to make new regulations, we offer specific suggestions on how participation designers can strike the balance between ease of engagement and quality of engagement---and so bring new voices into the policymaking process through participation that counts.

Solivan, J. & Farina, C.R. (2013). Regulation Room: How the Internet Improves Public Participation in Rulemaking. Proceedings of the Marine Safety & Security Council, 69(4) and 70(1), 58-82.

Cornell eRulemaking Initiative (CeRI) designed and operated Regulation Room, a pilot project that provides an online environment for people and groups to learn about, discuss, and react to selected proposed federal rules.

The project is a unique collaboration between CeRI academic researchers and the government. The U.S. Department of Transportation (USDOT) was CeRI's first agency partner and chose Regulation Room as its first open government "flagship initiative." USDOT received a White House Open Government Leading Practices Award for its collaboration in the project. CeRI owns, designs, operates, and controls Regulation Room, but works closely with partner agencies to identify suitable "live" rulemakings for the site and to evaluate success after a rule closes.

Farina, C.R., Epstein, D., Heidt, J., & Newhart, M.J. (2012). Knowledge in the People: Rethinking "Value" in Public Rulemaking Participation. Wake Forest Law Review, 47(5), 1185-1241.

A companion piece to Rulemaking vs. Democracy: Judging and Nudging Public Participation that Counts, this Essay continues to examine the nature and value of broader public participation in rulemaking. Here, we argue that rulemaking is a "community of practice," with distinctive forms of argumentation and methods of reasoning that both reflect and embody craft knowledge.

Rulemaking newcomers are outside this community of practice: Even when they are reasonably informed about the legal and policy aspects of the agency's proposal, their participation differs in kind and form from that of sophisticated commenters. From observing the actual behavior of rulemaking newcomers in the Regulation Room project, we suggest that new public participation is often, if not predominantly, experiential in nature and narrative in form. We argue that it is unrealistic to expect that rulemaking newcomers can be significantly inculcated into the norms and methods of the existing rulemaking community of practice. Yet, the potential policymaking value of the on-the-ground, situated knowledge they can bring to the discussion justifies efforts to expand our understanding of the kinds of comments that should "count" in the process. We take some first steps in that direction in this Essay.

Park, J., Cardie, C., Farina, C.R., Klingel, S., Newhart, M., & Vallbé, J.J. (2012). Facilitative Moderation for Online Participation in eRulemaking. Proceedings of the 13th Annual International Conference on Digital Government Research. College Park, MD: ACM.

This paper describes the use of facilitative moderation strategies in an online rulemaking public participation system. Rulemaking is one of the U.S. government's most important policymaking methods.

Although broad transparency and participation rights are part of its legal structure, significant barriers prevent effective engagement by many groups of interested citizens. Regulation Room, an experimental open-government partnership between academic researchers and government agencies, is a socio-technical participation system that uses multiple methods to lower potential barriers to broader participation. To encourage effective individual comments and productive group discussion in Regulation Room, we adapt strategies for facilitative human moderation, originating from social science research in deliberative democracy and alternative dispute resolution, for use in the demanding online participation setting of eRulemaking. We develop a moderation protocol, deploy it in "live" Department of Transportation (DOT) rulemakings, and provide an initial analysis of its use through a manual coding of all moderator interventions with respect to the protocol. We then investigate the feasibility of automating the moderation protocol: we employ annotated data from the coding project to train machine learning-based classifiers to identify places in the online discussion where human moderator intervention is required. Though the trained classifiers only marginally outperform the baseline, the improvement is statistically significant despite limited data and a very basic feature set, a promising result.
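The paper's actual features and corpus are not reproduced here, but the core idea of the automation step can be sketched as a supervised text classifier that flags comments likely to need moderator attention. The data, labels, and bag-of-words feature choice below are illustrative assumptions, not the paper's method:

```python
# Illustrative sketch: train a classifier that flags comments which may
# need human moderator intervention. Toy data and labels are invented;
# the real system used annotated data from the moderation coding project.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1 = a moderator intervened on this comment, 0 = no intervention needed
comments = [
    "This rule will destroy my small trucking business.",
    "I support the proposal as written.",
    "What does 'hours of service' even mean here?",
    "The comment period should be extended.",
    "Can someone explain how this affects owner-operators?",
    "This is a reasonable compromise.",
]
needs_intervention = [1, 0, 1, 0, 1, 0]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(comments, needs_intervention)

# Flag a new comment for possible moderator follow-up
flag = model.predict(["How do I find the actual rule text?"])[0]
print(flag)
```

In practice the interesting design question, as the abstract notes, is the feature set: with limited data even a basic representation like this one yields only marginal (if statistically significant) gains over the baseline.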

Farina, C.R., Heidt, J., Newhart, M.J., & Vallbé, J.J. (2012). RegulationRoom: Field-Testing An Online Public Participation Platform During USA Agency Rulemakings. In M. Gascó (Ed.), Proceedings of 12th European Conference on eGovernment. Reading: Academic Publishing International.

Rulemaking is one of the U.S. government's most important policymaking methods. Although broad transparency and participation rights are part of its legal structure, significant barriers prevent effective engagement by many groups of interested citizens.

RegulationRoom, an experimental open-government partnership between academic researchers and government agencies, is a socio-technical participation system that uses multiple methods to alert and effectively engage new voices in rulemaking. Initial results give cause for optimism but also caution that successful use of new technologies to increase participation in complex government policy decisions is more difficult and resource-intensive than many proponents expect.

Farina, C.R., Newhart, M., & Heidt, J. (2012). Rulemaking vs. Democracy: Judging and Nudging Public Participation that Counts. Michigan Journal of Environmental & Administrative Law, 2(1), 123-217.

Open government enthusiasts assume that more public participation will lead to better government policymaking: If we use technology to give people easier opportunities to participate, they will use these opportunities to participate effectively.

However, experience with technology-enabled rulemaking (e-rulemaking) belies this assumption. Engagement of new participants most often takes the form of mass comment campaigns orchestrated by advocacy groups. Challenging the conventional highly negative response to mass commenting, Prof. Nina Mendelson has recently argued that, in a democratic government, agencies should give at least some weight to the value preferences expressed in such comments when rulemaking involves value judgments. Engaging this important argument, we propose a framework for assessing the value of technology-enabled rulemaking participation. Our position -- that the types of preferences expressed in mass comments may be good enough for electoral democracy but not good enough for even heavily value-laden rulemaking -- challenges both the Web 2.0 ethos and the common open-government belief that more public participation, of any kind, is a good thing. In rulemaking and similar complex policymaking processes, more public participation is good only if it is the kind of participation that has value in the process. We offer specific principles of participation-system design drawn both from normative conceptions of the responsibilities of a democratic government and from the design-based research being carried on by CeRI (the Cornell eRulemaking Initiative) in the Regulation Room project. We argue that design of civic engagement systems must involve a purposeful and continuous effort to balance "more" and "better" participation, and stress that a democratic government should not actively facilitate public participation that it does not value.

Farina, C. R., Newhart, M. J., Cardie, C., & Cosley, D. (2011). Rulemaking 2.0. University of Miami Law Review, 65(2), 395-448.

In response to President Obama's Memorandum on Transparency and Open Government, federal agencies are on the verge of a new generation in online rulemaking. However, unless we recognize the several barriers to making rulemaking a more broadly participatory process, and purposefully adapt Web 2.0 technologies and methods to lower those barriers, Rulemaking 2.0 is likely to disappoint agencies and open-government advocates alike.

This article describes the design, operation, and initial results of Regulation Room, a pilot public rulemaking participation platform created by a cross-disciplinary group of Cornell researchers in collaboration with the Department of Transportation. Regulation Room uses selected live rulemakings to experiment with human and computer support for public comment. The ultimate project goal is to provide guidance on design, technological, and human intervention strategies, grounded in theory and tested in practice, for effective Rulemaking 2.0 systems.
Early results give some cause for optimism about the open-government potential of Web 2.0-supported rulemaking. But significant challenges remain. Broader, better public participation is hampered by 1) ignorance of the rulemaking process; 2) unawareness that rulemakings of interest are going on; and 3) information overload from the length and complexity of rulemaking materials. No existing, commonly used Web services or applications are good analogies for what a Rulemaking 2.0 system must do to lower these barriers. To be effective, the system must not only provide the right mix of technology, content, and human assistance to support users in the unfamiliar environment of complex government policymaking; it must also spur them to revise their expectations about how they engage information on the Web and also, perhaps, about what is required for civic participation.

Farina, C. R., Miller, P., Newhart, M. J., Cardie, C., Cosley, D., & Vernon, R. (2011). Rulemaking in 140 Characters or Less: Social Networking and Public Participation in Rulemaking. Pace Law Review, 31(1), 382-463.

Rulemaking -- the process by which administrative agencies make new regulations -- has long been a target for e-government efforts. The process is now one of the most important ways the federal government makes public policy. Moreover, transparency and participation rights are already part of its legal structure. The first generation of federal e-rulemaking involved putting the conventional process online by creating an e-docket of rulemaking materials and allowing online submission of public comments. Now the Obama Administration is urging agencies to embark on the second generation of technology-assisted rulemaking, by bringing social media into the process.

In this article we describe the initial results of a pilot Rulemaking 2.0 system, Regulation Room, with particular emphasis on its social networking and other Web 2.0 elements. (A companion article, Rulemaking 2.0, gives a more general overview of the project and is forthcoming in Miami Law Review). Web 2.0 technologies and methods seem well suited to overcoming one of the principal barriers to broader, better public participation in rulemaking: unawareness that a rulemaking of interest is going on. We talk here about the successes and obstacles to social-media based outreach in the first two rulemakings offered on Regulation Room. Our experience confirms the power of viral information spreading on the Web, but also warns that outcomes can be shaped by circumstances difficult, if not impossible, for the outreach effort to control.
There are two additional substantial barriers to broader, better public participation in rulemaking: ignorance of the rulemaking process, and the information overload of voluminous and complex rulemaking materials. Social media are less obviously suited to lowering these barriers. We describe here the design elements and human intervention strategies being used in Regulation Room, with some success, to overcome process ignorance and information overload. However, it is important to recognize that the paradigmatic Web 2.0 user experience involves behaviors fundamentally at odds with the goals of such strategies. One of these is the ubiquity of voting (through rating, ranking, and recommending) as "participation" online. Another is what Web guru Jakob Nielsen calls the ruthlessness of users in moving rapidly through websites, skimming rather than carefully reading content and impatiently seeking something to do quickly before they move on. Neither behavior serves those who would participate effectively in rulemaking. For this reason, Rulemaking 2.0 systems must be consciously engaged in culture creation, a challenging undertaking that requires simultaneously using, and fighting, the methods and expectations of the Web.

Bruce, T.L., Cardie, C., Farina, C.R., & Purpura, S. (2008). Facilitating Issue Categorization & Analysis in Rulemaking. Proceedings of the 9th Annual International Conference on Digital Government Research. Montreal, Canada: Digital Government Society of North America.

One task common to all notice-and-comment rulemaking is identifying substantive claims and arguments made in the comments by stakeholders and other members of the public. Extracting and summarizing this material may be helpful to internal decisionmaking; to produce the legally required public explanation of the final rule, it is essential.

When comments are lengthy or numerous, natural language processing and machine learning techniques can help the rulewriter work more quickly and comprehensively. Even when a smaller volume of comment material is received, the ability to annotate relevant portions and store information about them in a way that permits retrieval and generation of reports can be useful to the agency, especially over time. We describe a prototype application for these purposes. The Workspace for Issue Categorization and Analysis (WICA) allows the rulewriter to create a list of relevant substantive categories and assign them to marked portions of comment text. She can then retrieve all instances of a given issue within the comment pool. Preliminary results of experiments that apply text categorization and active learning methods to comment sets suggest that these techniques can facilitate the marking and category assignment process in lengthy or numerous comment sets. WICA will incorporate these techniques. Other possible applications of WICA within the rulemaking process are discussed.
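WICA's internals are not described here, but the underlying categorization task -- assigning rulewriter-defined issue categories to portions of comment text -- can be sketched as standard supervised sentence classification. The categories, example sentences, and TF-IDF/SVM pipeline below are illustrative assumptions, not the prototype's actual design:

```python
# Illustrative sketch of rule-specific issue categorization: assign each
# comment sentence to one of a rulewriter-defined set of issue categories,
# so all instances of an issue can later be retrieved. Data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

sentences = [
    "The proposed fee is far too high for small carriers.",
    "A 30-day comment period is not long enough.",
    "The fee schedule should be phased in over two years.",
    "Please extend the deadline for public comments.",
    "Costs will be passed on to consumers.",
    "More time is needed to analyze the economic data.",
]
issues = ["fees", "deadline", "fees", "deadline", "fees", "deadline"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(sentences, issues)

# Predict the issue category for a newly marked sentence
label = clf.predict(["The filing fee will burden owner-operators."])[0]
print(label)
```

Once sentences carry issue labels, retrieval of "all instances of a given issue within the comment pool" reduces to filtering on the predicted (or manually assigned) category.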

Cardie, C., Farina, C.R., Aijaz, A., Rawding, M., & Purpura, S. (2008). A Study in Rule-Specific Issue Categorization for e-Rulemaking. Proceedings of the 9th Annual International Conference on Digital Government Research. Montreal, Canada: Digital Government Society of North America.

We address the e-rulemaking problem of categorizing public comments according to the issues that they address. In contrast to previous text categorization research in e-rulemaking, and in an attempt to more closely duplicate the comment analysis process in federal agencies, we employ a set of rule-specific categories, each of which corresponds to a significant issue raised in the comments.

We describe the creation of a corpus to support this text categorization task and report interannotator agreement results for a group of six annotators. We outline those features of the task and of the e-rulemaking context that engender both a non-traditional text categorization corpus and a correspondingly difficult machine learning problem. Finally, we investigate the application of standard and hierarchical text categorization techniques to the e-rulemaking data sets and find that automatic categorization methods show promise as a means of reducing the manual labor required to analyze large comment sets: the automatic annotation methods approach the performance of human annotators for both flat and hierarchical issue categorization.

Cardie, C., Farina, C.R., Rawding, M., & Aijaz, A. (2008). An eRulemaking Corpus: Identifying Substantive Issues in Public Comments. Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC 2008), Marrakech, Morocco, 2008.

We describe the creation of a corpus that supports a real-world hierarchical text categorization task in the domain of electronic rulemaking (eRulemaking). Features of the task and of the eRulemaking domain engender both a non-traditional text categorization corpus and a correspondingly difficult machine learning task.

Interannotator agreement results are presented for a group of six annotators. We also briefly describe the results of experiments that apply standard and hierarchical text categorization techniques to the eRulemaking data sets. The corpus is the first in a series of related sentence-level text categorization corpora to be developed in the eRulemaking domain.

Purpura, S., Cardie, C., & Simons, J. (2008). Active Learning for e-Rulemaking: Public Comment Categorization. Proceedings of the 9th Annual International Conference on Digital Government Research. Montreal, Canada: Digital Government Society of North America.

We address the e-rulemaking problem of reducing the manual labor required to analyze public comment sets. In current and previous work, for example, text categorization techniques have been used to speed up the comment analysis phase of e-rulemaking by classifying sentences automatically, according to the rule-specific issues or general topics that they address.

Manually annotated data, however, is still required to train the supervised inductive learning algorithms that perform the categorization. This paper, therefore, investigates the application of active learning methods for public comment categorization: we develop two new, general-purpose active learning techniques to selectively sample from the available training data for human labeling when building the sentence-level classifiers employed in public comment categorization. Using an e-rulemaking corpus developed for our purposes, we compare our methods to the well-known query by committee (QBC) active learning algorithm and to a baseline that randomly selects instances for labeling in each round of active learning. We show that our methods statistically significantly exceed the performance of the random selection active learner and the QBC variation, requiring many fewer training examples to reach the same levels of accuracy on a held-out test set. This provides promising evidence that automated text categorization methods might be used effectively to support public comment analysis.
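The paper's two selection techniques are not reproduced here, but the generic pool-based active learning loop they compete in can be sketched with uncertainty sampling (a standard alternative to random selection and QBC). The synthetic data and the choice of uncertainty sampling are assumptions for illustration only:

```python
# Illustrative sketch of pool-based active learning for comment
# categorization: start from a small labeled seed set, then repeatedly
# query the pool instance the current model is least certain about,
# simulating a human annotator supplying its label each round.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
labeled = list(range(10))                 # small "annotated" seed set
pool = [i for i in range(300) if i not in labeled]

clf = LogisticRegression(max_iter=1000)
for _ in range(20):                       # 20 rounds of active learning
    clf.fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[pool])
    # Uncertainty sampling: pick the instance with the lowest max-class
    # probability (the model's least confident prediction)
    uncertainty = 1 - probs.max(axis=1)
    query = pool[int(np.argmax(uncertainty))]
    labeled.append(query)                 # simulate human labeling
    pool.remove(query)

print(clf.score(X[pool], y[pool]))        # accuracy on the remaining pool
```

The point of such a loop is exactly the claim in the abstract: a well-chosen query strategy reaches a target accuracy with far fewer human-labeled examples than random selection.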

Committee on the Status and Future of Federal e-Rulemaking (U.S.), & American Bar Association. (2008). Achieving the Potential: The Future of Federal E-Rulemaking. Chicago: Section of Administrative Law and Regulatory Practice, American Bar Association.

Cardie, C., Farina, C.R., Bruce, T., & Wagner, E. (2006). Using Natural Language Processing to Improve e-Rulemaking. Proceedings of the 7th Annual International Conference on Digital Government Research. San Diego, CA: Digital Government Research Center.

Cardie, C., Farina, C.R., Bruce, T., & Wagner, E. (2006). Better Inputs for Better Outcomes: Using the Interface to Improve e-Rulemaking. Proceedings of the Workshop on eRulemaking at the Crossroads, 7th Annual International Conference on Digital Government Research. San Diego, CA: Digital Government Research Center.

We believe that e-rulemaking does indeed have potential to increase both the transparency of, and participation in, regulatory policymaking. We argue in this paper that this potential can be realized only if the public interface at www.regulations.gov is substantially redesigned.