Pecha Kucha

  • Date: Thursday, June 5
  • Time: 15:20-16:40
  • Location: TRS 1-067
  • Chair: Amber Leahey

NOTE: The final order of presentations may differ from that shown below.


A Playbook on Obtaining Funding to Archive a Prominent Longitudinal Study

  • Presenter(s): Chiu-chuang Chou, Center for Demography of Health and Aging, University of Wisconsin
  • Presentation: 2014_PK_Chou.pdf
  • Abstract: The Center for Demography of Health and Aging (CDHA) at the University of Wisconsin-Madison recently received a small research grant (R03AG045503) from the National Institute on Aging (NIA) to archive three waves of the National Survey of Families and Households (NSFH). This project will evaluate, organize and prepare all public-use data and documentation files from the NSFH project website (http://www.ssc.wisc.edu/nsfh) for archiving in publicly accessible archives. I will share our experience in writing the grant proposal and explain NIA’s evaluation procedure for R03 grant proposals. The methods and specific aims of this project will also be described.


Data Visualization and Information Literacy

  • Presenter(s): Ryan Womack, Rutgers University Libraries
  • Presentation: 2014_PK_Womack.pdf
  • Abstract: What is the place for data visualization in information literacy? Data librarians are typically experts in explaining the use of specific databases and software tools to locate and analyze information, but data visualization outside of the specialized context of GIS is usually given short shrift. This Pecha Kucha illustrates the elements of data visualization best practice that deserve a place in the data librarian’s standard teaching toolkit, and recommends which aspects of data visualization are valuable for general inclusion in information literacy goals.


Visualisation of the Swiss research inventory

  • Presenter(s): Andreas Perret, FORS, Swiss Centre of Expertise in the Social Sciences
  • Presentation: 2014_PK_Perret.ppt
  • Abstract: Switzerland has been running a research inventory in the social sciences for the last 20 years, but little has been published on its contents. We chose to explore this dataset using the open source network graphing tool Gephi as well as the Sci2 visualization toolkit, which is used for scientometrics and made available by Indiana University (and introduced in a now famous MOOC on information visualization). The first results paint an interesting picture of the collaboration and financing flows within the country, and also reveal the limits of analysing research abstracts with tools made for an Anglo-American context. Research in Switzerland is described in English, German, French and sometimes Italian, with frequent inclusion of foreign expressions. Such situations certainly occur in other countries, and we intend to explore ways to solve these issues. Our aim is also to share some of the lessons learned in this work, among them the use of SQL queries to build the input data (see the sketch below) and the effects that choices made while processing the data have on the visual outputs. In the absence of proper documentation, these choices turn scientific visualizations into black boxes that become a new challenge for the curious scientist.
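
    A minimal sketch of the kind of preprocessing step mentioned above, assuming the inventory is held in a relational database with a project-researcher link table; the file, table and column names here are hypothetical, not those of the FORS inventory. The script builds a weighted co-participation edge list that Gephi can import directly:

        # build_edges.py - illustrative only; the schema is an assumption
        import csv
        import sqlite3

        # Hypothetical inventory table: project_member(project_id, researcher_id)
        QUERY = """
            SELECT a.researcher_id AS source, b.researcher_id AS target, COUNT(*) AS weight
            FROM project_member a
            JOIN project_member b
              ON a.project_id = b.project_id AND a.researcher_id < b.researcher_id
            GROUP BY a.researcher_id, b.researcher_id
        """

        with sqlite3.connect('inventory.db') as conn, open('edges.csv', 'w', newline='') as out:
            writer = csv.writer(out)
            writer.writerow(['Source', 'Target', 'Weight'])  # column headers Gephi recognises for an edge list
            for row in conn.execute(QUERY):
                writer.writerow(row)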


When I grow up I want to be a data scientist / work in policy-related research / make a difference. Can you help me?

  • Presenter(s): Jackie Carter, University of Manchester
  • Presentation: 2014_PK_Carter.pptx
  • Abstract: In 2013, at the Campaign for Social Science event in the UK, David Willetts, Minister for Higher Education and Skills, pronounced that in a world with an increasing volume of social data there is an urgent need to “have properly qualified people to exploit and use the data”. At present [the UK has] a serious shortage of social science graduates with the right quantitative skills to evaluate evidence and analyse data. This is not a new finding. Significant and shared efforts in the last decade have resulted in a large national initiative being funded to tackle this problem in the UK (Q-Step, Nuffield 2013). This PK will highlight the void in teaching quantitative social science and show how we (the ESSTED team at Manchester) are addressing the challenge of embedding number skills into the social science curriculum. A variety of techniques have been adopted and trialled: using real-world survey data in the classroom alongside making students part of the dataset; flipping lectures; and adopting the mantra of ‘practice’ for what is a real-world and employable skill. The PK will present results based on the evidence collated, conclusions based on our experiences, and ideas about how this is informing the University of Manchester’s Q-Step centre.


Exploring how to raise awareness of a data service through academic libraries in the UK

  • Presenter(s): Margherita Ceraolo, UK Data Service, University of Manchester
  • Presentation: 2014_PK_Ceraolo.pptx
  • Abstract: The UK Data Service, as part of its marketing strategy, is investigating how to raise awareness through academic libraries. This research, conducted by an intern, takes the form of a case study exploring how to embed the UK Data Service within four UK academic libraries. The institutions were selected based on data usage, and the methodology is qualitative: data are collected through semi-structured interviews. Background information gathered about the libraries’ structures and strategies suggests that the need to justify expenditure shapes how libraries support data providers, which poses challenges for free-at-the-point-of-use (‘free’) data services. In other words, it could result in less focus on support in the form of training through tutorials or focus groups. The following question arises: are ‘free’ services less supported by libraries than those that require fees, because of the need to justify membership expenditures? This presentation explores how data services can improve interactions with academic libraries in the UK. By closely examining the strategy of four UK universities, it sheds light on the challenges of raising awareness of a data provider like the UK Data Service through collaboration with libraries. It also aims to discover whether the study’s findings can be applied to other institutions.


Streamlining the research data archival process at Johns Hopkins University

  • Presenter(s): Jonathan Petters, Johns Hopkins University Data Management Services
  • Presentation: 2014_PK_Petters.pptx
  • Abstract: Among other research data management services, Johns Hopkins University Data Management Services (JHUDMS, http://dmp.data.jhu.edu) provides its researchers the opportunity to preserve and share their data through the JHU Data Archive. This archive is a research data-specific repository that can host a wide variety of quantitative and qualitative data, and is both format- and discipline-agnostic.

    We in JHUDMS have begun archiving research data originating from two NSF-funded engineering research projects. This archiving process has begun with data associated with publications, which may be a typical model for library research data archives. These efforts have been an opportunity to understand the time and effort required for activities that can add value to a research data collection (e.g. discussions with the researcher, development of data flow diagrams, migration of data to non-proprietary formats). We will discuss these collections and the steps taken to create them.

    It is of benefit to both the researchers and JHUDMS to scope and streamline the archiving process with research data associated with publications in mind. We will discuss our current understanding of the most effective curation activities we can efficiently accomplish, parameters for those activities, and those elements perceived by the researcher to be most valuable.


Participant Observer: The A2DataDive

  • Presenter(s): Lisa Neidert, University of Michigan
  • Abstract: The School of Information and the Institute for Social Research at the University of Michigan sponsored a Data Dive, a 30-hour hackathon-style service event to help three local non-profits make sense of the data in their administrative records. The volunteer data scientists, coders, designers, and consultants chose which project to work on, and the needs of the non-profits ranged from visualizations to analysis. As a participant observer I will report on (a) how the non-profits compare with our normal consultations; (b) what the hackers look like – are they potential IASSISTers, the future of IASSIST, or something else; (c) what tools the hackers used; (d) what parts of the Data Dive were most like an IASSIST conference; and (e) what it takes to put on a Data Dive besides cool dry-erase table tops.


DDI3 Metrics

  • Presenter(s): Claude Gierl, Centre for Longitudinal Studies, Institute of Education; Jon Johnson
  • Presentation: 2014_PK_Gierl.ppt
  • Abstract: The Centre for Longitudinal Studies (CLS) and the CLOSER (Cohorts and Longitudinal Studies Enhancement Resources) programme in the United Kingdom are in the process of translating legacy paper questionnaires to electronic format on a large scale. The ultimate goal of this programme is to gather the metadata of nine birth cohort studies into a joint DDI3 repository for online searching. In order to monitor the progress of this operation we define metrics which reflect the characteristics of the DDI3 schema, in particular its heavy reliance on references, whereby, whenever possible, a DDI3 element is defined once and reused by reference wherever it is required. We measure the volume of the metadata ingested with a Cell Count of the DDI3 XML elements produced, and the quality of the metadata with the Synaptic Density, the ratio of DDI3 references to the Cell Count (see the sketch below). The Synaptic Density is expected to have a dual role over the course of the ingestion. In the earlier phases, it measures the lack of redundancy and monitors the cleaning and deduplication processes. In the more mature phases of the repository, it is expected to rise again as re-use, derivations and harmonisations gradually interweave and enrich the metadata.
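
    A minimal sketch of how these two metrics might be computed for a single DDI3 instance, assuming that references can be recognised as elements whose local name ends in “Reference” (as in r:QuestionReference); the function and file names are illustrative, not part of the CLS/CLOSER tooling:

        # ddi3_metrics.py - illustrative only; the reference-naming convention is an assumption
        import xml.etree.ElementTree as ET

        def ddi3_metrics(path):
            """Return (cell_count, synaptic_density) for one DDI3 XML instance."""
            elements = list(ET.parse(path).getroot().iter())   # every element in the instance
            cell_count = len(elements)                          # "Cell Count": total DDI3 elements
            references = [e for e in elements
                          if e.tag.rsplit('}', 1)[-1].endswith('Reference')]
            synaptic_density = len(references) / cell_count if cell_count else 0.0
            return cell_count, synaptic_density

        if __name__ == '__main__':
            count, density = ddi3_metrics('questionnaire_instance.xml')  # hypothetical file
            print(f'Cell Count: {count}, Synaptic Density: {density:.3f}')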


Dot.Stat: It’s All That

  • Presenter(s): Richard Wiseman, UK Data Service; Susan Noble
  • Presentation: 2014_PK_Wiseman.pptx
  • Abstract: In this presentation we will introduce ‘UKDS.Stat’, the new data delivery platform for international macrodata at the UK Data Service. We will describe the exciting new features of UKDS.Stat, including:
    • fully integrated metadata
    • data visualisation
    • search across all datasets in one platform
    • save and share data subsets as queries
    • combine data from different datasets

    We will then describe how we have made it possible for users to access both open and protected data within the same platform. Finally, we will illustrate how our implementation of authentication for UKDS.Stat was shared with the Statistical Information Systems Collaboration Community (SIS-CC) – a group of organisations co-developing ‘.Stat’. The OECD-led SIS-CC was set up so that members could benefit from broad collaboration, the sharing of experiences, knowledge and best practices, and cost-effective innovation in minimal time. Members include the International Monetary Fund, the European Commission, the Australian Bureau of Statistics and UNESCO.
