Help Make Citizen Science Tools Accessible and Discoverable

SciStarter, a longtime collaborator of the Wilson Center, has always been a platform for the citizen science community to find projects to contribute to. But for many projects and would-be participants, there are significant challenges to finding and accessing the right tools for collecting and sharing scientific information.

To meet this need, SciStarter is expanding its platform to collect information on a range of tools that projects and volunteers can build, borrow, or buy. The Wilson Center, a proud partner in this initiative, recently co-hosted a workshop at the Arizona State University Citizen Science and Maker Summit to help design an initial taxonomy for the SciStarter Tools Database. Our next step toward finalizing this taxonomy is testing the initial list of fields by collecting information on a range of tools used by citizen science project coordinators and volunteers.

If there is a citizen science tool that you sponsor, fund, design, build, calibrate, distribute, or otherwise consider yourself an expert on, we need your help. Please add your tool to the beta version of the SciStarter Tools database by filling out this form (note: you will need a SciStarter user account to submit your information). The information you share will help us better understand how people articulate important information about citizen science tools before the database structure is finalized and populated with hundreds of tools from around the world.


Photo: A kit containing the tools required to participate in a citizen science project measuring soil moisture. For more information on citizen science tool kits, check out this video.

While the taxonomy included in the beta version of the SciStarter Tools database builds upon the contributions of our workshop participants, the workshop was a single step in a longer process of user-centered design.

  1. In early 2016, SciStarter and Arizona State University interviewed 110 people about their citizen science tool needs. The core project team, which included SciStarter founder Darlene Cavalier, ASU Assistant Professor of Engineering Dr. Micah Lande, and students Brianne Fisher, David Sittenfeld, and Erica Prange, used lean launch methods to identify key “pain points” for the “customer segment” of citizen science volunteers and project organizers. Insights gleaned from this approach are documented here.
  2. Led by Erica Prange, the SciStarter team then surveyed 50 project owners about their tools. These responses were instrumental in building the initial database taxonomy.
  3. SciStarter, the Wilson Center, and Arizona State University held a workshop to test the use of key database fields through personas and use cases, validating key hypotheses and identifying new information to include.
  4. Based on this research, a beta version of the SciStarter Tools database will be tested by project coordinators and volunteers who enter information about their tools here.
  5. Entries to the beta database will be tested and reviewed by an expert advisory panel. This step will help ensure, for example, that manufacturer specifications for longevity, data quality, and other factors are accurate across a range of use conditions.
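To make the idea of a tools taxonomy concrete, here is a minimal sketch of what a single database entry might look like. The field names below are illustrative assumptions for this post, not the finalized SciStarter taxonomy, which is still being tested.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of one entry in the SciStarter Tools database.
# Field names are assumptions, not the finalized taxonomy.
@dataclass
class ToolEntry:
    name: str
    category: str                      # e.g. "sensor", "kit", "software"
    acquisition: str                   # "build", "borrow", or "buy"
    manufacturer: Optional[str] = None
    expert_reviewed: bool = False      # flipped after advisory-panel review (step 5)
    projects: List[str] = field(default_factory=list)

soil_kit = ToolEntry(
    name="Soil moisture kit",
    category="kit",
    acquisition="buy",
    projects=["Soil moisture monitoring"],
)
```

A structure like this would let volunteers filter tools by how they can be acquired, while the `expert_reviewed` flag captures whether an advisory panel has vetted the manufacturer's claims.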

Thanks to a growing number of databases, including SciStarter and the Federal Catalog, information about citizen science projects is now easy to find. Information about tools, including the hardware cataloged by SciStarter and the software required to support data management and other key phases of the scientific research process, remains elusive. The SciStarter Tools database is one crucial step toward a research ecosystem where the public can access both opportunities for contributing to science and the tools required to get research done.

AI: Policies for the Next Administration

This piece surveys the general state of AI policy and puts some numbers to the federal activity encouraging research and application in this field.

Growing interest in Artificial Intelligence (AI) comes from a range of audiences including academia, commercial industry, the entertainment industry, and now the White House Office of Science and Technology Policy (OSTP). On October 12, 2016, OSTP released two new documents on AI. The National Artificial Intelligence Research and Development Strategic Plan is a high-level framework for prioritizing and coordinating federal research and development (R&D) to advance AI. A companion piece, Preparing for the Future of Artificial Intelligence, discusses the technologies that can be identified as AI and how these may be used to benefit society. A third document, to be authored by the President’s Council of Economic Advisors and released later this year, will explore how AI might affect employment.

AI—along with related initiatives on topics ranging from precision medicine to the Internet of Things (IoT)—was also showcased at the White House Frontiers Conference. But the winds of change blow fiercely in election years. With the presidential election approaching, one key task for the incoming administration will be to take account of current OSTP policies and determine whether these will be bolstered or recast. The rest of this piece considers: How can the next administration advance current policies? And what can be done in and beyond the White House to ensure that AI R&D remains a national priority in the coming years?

There are many technologies that can be grouped under the designation of AI, from general techniques to specific applications. One general technique is deep learning, a family of methods that learn layered representations of data, with each layer building a more abstract representation from the output of the layer below. A more specific application is machine translation, which powers the familiar service Google Translate. While many AI technologies were initially created in research laboratories, innovation in AI is increasingly driven by industry, thanks (for example) to market incentives for investing in digital personal assistants and autonomous driving.
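The "layers" idea can be sketched in a few lines of code: each layer transforms its input into a new representation, and stacking layers is what makes the learning "deep." The weights below are fixed toy values chosen for illustration; in practice they would be learned from data.

```python
import math

# A minimal sketch of layered representations: one dense layer with a
# sigmoid nonlinearity, applied twice. Weights are toy values, not trained.
def layer(inputs, weights, biases):
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

x = [0.5, -1.0]                                        # raw input features
h = layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1])    # first representation
y = layer(h, [[1.0, 1.0]], [-1.0])                     # deeper representation
```

Real deep learning systems stack many such layers and adjust the weights automatically, but the structure, representations built on representations, is the same.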

Any administration that makes AI a priority will want to see the US emerge as a global leader. Unfortunately, because AI is a relatively new area of broad public interest, the exact levels of financial investment in AI research are not well documented, and determining where funding originates or how it is spent is challenging. As one starting point, the National AI R&D strategy cites $1.1 billion in unclassified, federally funded R&D for AI. [1] However, without more statistics on AI investment, it is difficult to compare US federal investments with funding in other countries. Some other approximations of productivity are possible, including comparing the number of relevant journal articles and patents in countries like China and the US (e.g., NSTC’s strategy in the National Artificial Intelligence Research and Development Strategic Plan), or comparing the number of computer science publications by country.[2]

EVENT: Data Journalism and Policymaking: A Changing Landscape

Image: Data Journalism Handbook, from The Guardian

What is data journalism? Why does it matter? How has the maturing field of data science changed the direction of journalism and global investigative reporting? Our speakers will discuss the implications for policymakers and institutional accountability, and how the balance of power in information gathering is shifting worldwide, with implications for decision-making and open government.

Wednesday, July 30th, 10:00am – 12:00pm

5th Floor Conference Room, Woodrow Wilson Center



Tweet your questions @STIPcommonslab

Follow the event on Twitter: #datajournalism

Read the report Art and Science of Data Journalism here.


Kalev H. Leetaru is the 2013-2014 Yahoo! Fellow at Georgetown University, a Council Member of the World Economic Forum’s Global Agenda Council on the Future of Government, and a Foreign Policy Magazine Top 100 Global Thinker of 2013. For nearly 20 years he has studied the web and built systems to interact with and understand the way it is reshaping our global society. His work has been featured in the press of over 100 nations and has fundamentally changed how we think about information at scale and how the “big data” revolution is changing our ability to understand our global collective consciousness.

Alexander B. Howard is a writer and editor based in Washington, DC. Howard is a columnist at TechRepublic and founder of “E Pluribus Unum,” a blog focused on open government and technology. Previously, he was a fellow at the Tow Center for Digital Journalism at Columbia University and the Ash Center at Harvard University, and the Washington Correspondent for O’Reilly Media.


Louise Lief is a journalist and media educator. From 1998 to 2012 she was the deputy director of the International Reporting Project (IRP) at the Johns Hopkins University School of Advanced International Studies. She designed and directed programs and launched initiatives to engage reporters and editors in a wide range of issues, and to explore how technological change impacts international newsgathering. Previously, she was a diplomatic correspondent and senior editor for U.S. News and World Report, and a journalist overseas based in Europe and the Middle East, contributing to a variety of news organizations, including the New York Times, CBS News “60 Minutes,” The Christian Science Monitor, and many others.

The Wilson Center
Ronald Reagan Building and International Trade Center
One Woodrow Wilson Plaza
1300 Pennsylvania Ave. NW
Washington, DC

This event is free and open to the public. Please allow time on arrival at the building for routine security procedures. A photo ID is required.

Individuals attending Woodrow Wilson Center events may be audiotaped, videotaped, or photographed during the course of a meeting, and by attending grant permission for their likenesses and the content of their comments, if any, to be broadcast, webcast, published, or otherwise reported or recorded.

Big Data: Seizing Opportunities, Preserving Values

Each day, humans upload more than 500 million photographs documenting every aspect of their lives. But while striking, this statistic pales in comparison to the vast quantity of information created not by humans, but about them. These data come from technologies as diverse as GPS-enabled smartphones, wearable pedometers, and information captured in web logs and cookies.


Figure 1: Personal health devices such as Fitbit track metrics including distance walked, steps climbed, calories burned, and hours slept each night. Image credit:

This generated information is big data, defined as “large, diverse, complex, longitudinal and/or distributed datasets generated from instruments, sensors, Internet transactions, email, video, click streams, and/or all other digital sources available today and in the future.” Big data brings tremendous potential for advancing scientific research. One researcher studying 35,000 schizophrenia patients identified a genetic variant that eluded previous researchers working with smaller sample sizes. But big data also sharpens the potential for subtle, or even invisible, forms of discrimination. For example, algorithms determining which audiences receive offers for student loans could be so finely tuned that they target only people of a certain gender, race, or income bracket.
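The discrimination concern can be made concrete with a small, entirely hypothetical example: a targeting rule that never mentions income, yet singles out one income bracket because the proxy features it scores on correlate with income.

```python
# Illustrative sketch (hypothetical data and rule) of how a "neutral"
# targeting algorithm can single out one income bracket: the rule scores
# only location and behavioral proxies, never income itself.
people = [
    {"name": "A", "zip": "00001", "late_night_browsing": True,  "income": "low"},
    {"name": "B", "zip": "00001", "late_night_browsing": True,  "income": "low"},
    {"name": "C", "zip": "00002", "late_night_browsing": False, "income": "high"},
]

def loan_offer_score(person):
    score = 0
    if person["zip"] == "00001":        # neighborhood as an income proxy
        score += 1
    if person["late_night_browsing"]:   # behavior correlated with shift work
        score += 1
    return score

targeted = [p["name"] for p in people if loan_offer_score(p) >= 2]
# Every targeted person turns out to be in the low-income bracket,
# even though the rule never looked at income.
```

Because the protected attribute never appears in the code, the bias is invisible unless someone audits the outcomes rather than the rule.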

Appiro Testifies Before Congress on Crowdsourcing, Innovation and Prizes While Wearing Google Glass

Mr. Singh wearing Google Glass while testifying.

As Congress moves forward with integrating more prizes and challenges for crowdsourcing scientific research, one expert from Silicon Valley raises important issues in the government’s approach.

Narinder Singh, the co-founder and chief strategy officer at Appiro, a cloud-based technology company that uses crowdsourcing to solve problems, was invited to speak before the House Science Committee’s research subcommittee earlier this month. Singh addressed the lawmakers in a hearing on “Prizes to Spur Innovation and Technology Breakthroughs” while wearing Glass, a new wearable device from Google that lets the user take images, record video, and retrieve information using voice commands.

After an introduction to Appiro, Singh described the company’s topcoder program, a community of 600,000 designers, developers, and data scientists who serve as an exclusive “crowd” for crowdsourcing client problems. Using this group, Appiro breaks down complex problems into smaller projects, presenting them to the crowd to solve and awarding cash prizes for the best solutions. To date, Appiro has partnered with NASA, the Centers for Medicare and Medicaid Services, and the National Institutes of Health to design and implement a variety of challenges and prizes using the topcoder crowd.
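The workflow Singh described, split a problem into sub-tasks, collect community submissions, and award a prize to the best entry per task, can be sketched as follows. The function name, prize amount, and scoring scheme are illustrative assumptions, not how topcoder actually works.

```python
# Hypothetical sketch of prize-based crowdsourcing: each sub-task collects
# scored submissions, and the highest-scoring entrant wins the prize.
def award_prizes(subtasks, submissions, prize_usd=500):
    # submissions maps a sub-task to a list of (member, judged_score) pairs.
    winners = {}
    for task in subtasks:
        entries = submissions.get(task, [])
        if entries:
            member, _ = max(entries, key=lambda e: e[1])
            winners[task] = (member, prize_usd)
    return winners

winners = award_prizes(
    ["UI design", "data model"],
    {"UI design": [("dev1", 7), ("dev2", 9)], "data model": [("dev3", 8)]},
)
```

Decomposing the problem first is what makes the approach scale: each sub-task is small enough for an individual community member to attempt independently.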
