Finding high-quality grey literature for use as evidence in software engineering research.
Abstract
Background: Software engineering researchers often use practitioners as a source of evidence in their studies. This evidence is usually gathered through empirical methods such as surveys, interviews and ethnographic research. The web has brought with it the emergence of the social programmer: software practitioners now publish their opinions online through blog articles, discussion boards and Q&A sites. Mining these online sources of information could provide a new form of evidence which complements traditional evidence sources.
There are benefits to the adoption of grey literature in software engineering research (such as bridging the gap between the state of the art, where research typically operates, and the state of practice), but also significant challenges. The main challenge is finding grey literature that is of high quality to the researcher, given the vast volume of grey literature available on the web. The thesis defines the quality of grey literature in terms of its relevance to the research being undertaken and its credibility. The thesis also focuses on a particular type of grey literature that has been written by software practitioners. A typical example of such grey literature is blog articles, which are used as examples throughout the thesis.
Objectives: There are two main objectives to the thesis: to investigate the problems of finding high-quality grey literature, and to make progress in addressing those problems. In working towards these objectives, we investigate our main research question: how can researchers more effectively and efficiently search for, and then select, the higher-quality blog-like content relevant to their research? We divide this question into twelve sub-questions, and more formally define what we mean by 'blog-like content'.
Method: To achieve the objectives, we first investigate how software engineering researchers define and assess quality when working with grey literature, and then work towards a methodology, and a tool-suite, which can semi-automate the identification and quality assessment of relevant grey literature for use as evidence in the researcher's study.
To investigate how software engineering researchers define and assess quality, we first conduct a literature review of credibility assessment to gather a set of credibility criteria. We then validate those criteria through a survey of software engineering researchers. This gives us an overall model of credibility assessment within software engineering research.
We next investigate the empirical challenges of measuring quality and develop a methodology, adapted from the case survey methodology, which aims to address the problems and challenges identified. Alongside the methodology is a suggested tool-suite intended to help researchers automate the application of a subset of the credibility model. The tool-suite supports the methodology by, for example, automating tasks so that the analysis scales. The use of the methodology and tool-suite is then demonstrated through three examples, which include a partial evaluation of the methodology and tool-suite.
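As an illustration of the kind of automation such a tool-suite might provide, the following Python sketch fetches many candidate documents concurrently so that later analysis can scale. The URLs and the fetch step are placeholders for illustration, not the thesis's actual tools.

    from concurrent.futures import ThreadPoolExecutor
    import urllib.request

    def fetch(url):
        """Download one candidate document as text."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")

    # Placeholder URLs; a real study would take these from search results.
    urls = ["https://example.com/post-1", "https://example.com/post-2"]

    # Fetch in parallel so the corpus can grow without runtime growing with it.
    with ThreadPoolExecutor(max_workers=8) as pool:
        documents = list(pool.map(fetch, urls))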
Results: Our literature review of credibility assessment identified a set of criteria that have been used in previous research. However, we also found a lack of definitions for both the criteria and, more generally, the term credibility. Credibility assessment is a difficult and subjective task that is particular to each individual. Research has addressed this subjectivity by conducting studies that look at how particular user groups assess credibility (e.g. pensioners, university students, the visually impaired); however, none of the studies reviewed examined software engineering researchers. Informed by the literature review, we conducted what we believe is the first study on the credibility assessment of software engineering researchers. The results of the survey are a more refined set of criteria, but also a set that many (approximately 60%) of the survey participants believed generalises to other types of media (both practitioner-generated and researcher-generated).
We found that there are significant challenges in using blog-like content as evidence in research: for example, identifying the high-quality content among the vast quantity available on the web, and creating methods of analysis which scale to handle that quantity. In addressing these challenges, we produce: a set of heuristics which can help in finding higher-quality results when searching with traditional search engines; a validated list of reasoning markers that can aid in assessing the amount of reasoning within a document (see the sketch below); a review of the current state of the experience mining domain; and a modifiable classification schema for classifying the source of URLs.
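To make the reasoning-marker idea concrete, here is a minimal sketch of estimating how much reasoning a document contains by counting marker occurrences per 100 words. The marker list shown is an illustrative subset, not the validated list produced by the thesis.

    import re

    # Illustrative subset only; the thesis derives a validated marker list.
    REASONING_MARKERS = ["because", "therefore", "however",
                         "for example", "as a result"]

    def reasoning_density(text):
        """Occurrences of reasoning markers per 100 words of text."""
        lowered = text.lower()
        words = re.findall(r"\w+", lowered)
        if not words:
            return 0.0
        hits = sum(len(re.findall(r"\b" + re.escape(m) + r"\b", lowered))
                   for m in REASONING_MARKERS)
        return 100.0 * hits / len(words)

    print(reasoning_density("We chose it because it scales; however, it is costly."))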
With credibility assessment being such a subjective task, there can be no one-size-fits-all method of automating quality assessment. Instead, our methodology is intended to be used as a framework in which the researcher can swap out and adapt the criteria that we assess, substituting their own criteria based on the context of the study being undertaken and their personal preference. We find from the survey that there is a variety of attitudes towards using grey literature in software engineering research, and that not all respondents view the use of grey literature as evidence in the way that we do (i.e. as having the same benefits and threats as other traditional methods of evidence gathering).
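One way to picture this swap-out design is to treat each criterion as an independent scorer that the researcher composes into their own set, as in the sketch below. The criterion functions and their names are illustrative assumptions, not the criteria defined in the thesis.

    # Each criterion maps a document to a score; researchers choose their own set.

    def mentions_experience(text):
        markers = ("in my experience", "we found", "i have used")
        return 1.0 if any(m in text.lower() for m in markers) else 0.0

    def cites_sources(text):
        return 1.0 if "http://" in text or "https://" in text else 0.0

    def assess(text, criteria):
        """Apply a researcher-chosen dictionary of criteria to one document."""
        return {name: fn(text) for name, fn in criteria.items()}

    # Swapping criteria in or out is just editing this dictionary.
    my_criteria = {"experience": mentions_experience, "citations": cites_sources}
    print(assess("In my experience, see https://example.com for details.", my_criteria))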
Conclusion: The work presented in this thesis makes significant progress towards answering our research question and provides a foundation for future research on automated quality assessment and credibility. Adoption of the tools and methodology presented in this thesis can help researchers more effectively and efficiently search for and select higher-quality blog-like content, but there is a need for more substantial research on the credibility assessment of software engineering researchers, and for a more extensive credibility model. This can be achieved by replicating the literature review systematically, accepting more studies for analysis, and conducting a more extensive survey with a greater number, and a more representative selection, of respondents.
With a more robust credibility model, we can have more confidence in the criteria that we choose to include within the methodology and tools, as well as automating the assessment of more criteria. Throughout the research, aggregating the results after assessing each criterion has been a challenge. Future research should look towards the adoption of machine learning methods to aid this aggregation. We believe that the criteria and measures used by our tools can serve as features for machine learning classifiers, which would be able to assess quality more accurately. However, before such work can take place, annotated datasets need to be developed.
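A brief sketch of what this aggregation might look like, assuming per-criterion measures have already been computed as numeric features and that an annotated quality dataset exists (it does not yet; all values below are toy):

    from sklearn.ensemble import RandomForestClassifier

    # Rows of per-document features: [reasoning_density, experience, citations]
    X = [[2.1, 1.0, 1.0],
         [0.3, 0.0, 0.0],
         [1.5, 1.0, 0.0],
         [0.1, 0.0, 1.0]]
    y = [1, 0, 1, 0]  # hypothetical high/low quality annotations

    # The classifier aggregates the criterion scores into a single judgement.
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.predict([[1.8, 1.0, 1.0]]))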