Pruning open-archaeo

I’ve been working with Joe Roe to analyze open-archaeo to better understand the community of practice around archaeological software development. This prompted us to go back and remove records that are arguably out of scope. We identified a few dozen items that don’t really fit with our objective of assembling software made by archaeologists for archaeological purposes. Many of these are general-purpose tools that happen to be relevant to archaeological use-cases, and whose contributors and primary maintainers are largely, if not entirely, non-archaeologists (e.g. xraylib, QField). Others deal with specific processes that form parts of methods from other disciplines that archaeologists have come to work with and rely on, such as genetics, ecology and earth science, but which seem too distant from archaeology to warrant inclusion in open-archaeo.

It’s a bit jarring to make such a big update — especially one that removes around 15% of the dataset — so soon after we published a data paper about it. However, that paper was meant to communicate the data collection methods, the data structure, and the rationale, purpose and value that underlie open-archaeo and its continued development. We have always been very clear and upfront that this is a live project, but it’s still a bit awkward trying to align our work with more traditional forms of scholarly communication that are suited for more stable outcomes than what continuous and open-source projects afford.

You can see the changes in this git commit. Don’t hesitate to get in touch if you have any questions or concerns. This is still a community effort and I could not do this without all of your contributions 🙂

ArcheoFOSS XVII

This week I participated in ArcheoFOSS in Turin, Italy. I’ve always been keen to present at this conference but somehow never really felt I had much of importance to say (aside from open-archaeo stuff, but Joe Roe and I already presented about it at the 2021 CAA conference, and more detailed analysis is still in the works). But this year Joe and I took the opportunity to co-lead a panel on archaeology and the fediverse based on our experiences administering and moderating the archaeo-social Mastodon instance. Our panel was meant to highlight key challenges and opportunities for collectively-owned and community-led scholarly social media, and while it only consisted of a few papers, it definitely got the ball rolling on further critical discussion regarding the role of the fediverse and decentralized communication protocols in online archaeological discourse. Joe and I are initiating work on a position paper that assembles the main ideas presented during the panel and subsequent discussion, so stay tuned for more on that. In the meantime, you can access our introductory remarks on Zenodo and GitHub.

I also presented a paper on the challenges I experienced integrating and reusing data during my Master’s thesis, which I completed 8 years ago, basically summarizing its failures (trying to channel Shawn Graham’s Failing Gloriously). This served as a venue for finally presenting my long-held yet unpublished uncertainties about the value of analyses that integrate legacy data, drawn from my personal experiences.

I really appreciated how low-key and relaxing the conference was. It was great to just have a casual experience with a relatively small group of like-minded researchers. I was very fortunate to be able to travel to Turin and participate in person. I also went on a nice post-conference excursion to Genoa, which is a truly lovely city. Thanks to Stefano Costa for informing me about the best places to visit and eat!

Photos: Mole Antonelliana in Turin; the Po River in Turin (two views); sunset in Genoa.

Finished my dissertation!

I finally defended my doctoral dissertation a few weeks ago, and after 7 years I’m happy to put it out into the world: https://doi.org/10.5281/zenodo.8373390

To briefly summarize: I observed and interviewed archaeologists while they worked, focusing on how they collaborate to produce information commons within relatively small, bounded communities. I relate these observations to issues experienced when sharing data globally on the web using open data platforms. This is part of an effort to reorient data sharing (and other aspects of open science) as a social, collaborative, communicative, and commensal experience.

Many thanks to my supervisor, Costis Dallas, for being such a great mentor, and to Matt Ratto and Ted Banning for their constant constructive feedback. And special thanks to the external examiners, Jeremy Huggett and Ed Swenson, for critically engaging with my work.

Archaeological data work as continuous and collaborative practice

This dissertation critically examines the sociotechnical structures that archaeologists rely on to coordinate their research and manage their data. I frame data as discursive media that communicate archaeological encounters, which enable archaeologists to form productive collaborative relationships. All archaeological activities involve data work, as archaeologists simultaneously account for the decisions and circumstances that framed the information they rely on to perform their own practices, while anticipating how their information outputs will be used by others in the future. All archaeological activities are therefore loci of practical epistemic convergence, where meanings are negotiated in relation to communally-held objectives.

Through observations of and interviews with archaeologists at work, and analysis of the documents they produce, I articulate how data sharing relates distributed work experiences as part of a continuum of practice. I highlight the assumptions and value regimes that underlie the social and technical structures that support productive archaeological work, and draw attention to the inseparable relationship between the management of labour and data. I also relate this discursive view of data sharing to the open data movement, and suggest that it is necessary to develop new collaborative commitments pertaining to data publication and reuse that are more in line with disciplinary norms, expectations, and value regimes.

open-archaeo data paper

Today, Joe Roe and I published a data paper in the Journal of Open Archaeology Data on open-archaeo, the comprehensive list of open source archaeological software and resources that we maintain. In this paper, we outline the data collection methods and conceptual model, and highlight open-archaeo’s value as a public resource and as a dataset for examining the emerging community of practice surrounding open source software development in research contexts. In fact, open-archaeo serves as the basis for an extended dataset in a study we are currently working on (investigating collaborative coding experiences) and we think there is a lot of potential for additional analysis in the future.

Open-archaeo: A Resource for Documenting Archaeological Software Development Practices
https://doi.org/10.5334/joad.111

Open-archaeo (https://open-archaeo.info) is a comprehensive list of open software and resources created by and for archaeologists. It is a living collection—itself an open project—which as of writing includes 548 entries and associated metadata. Open-archaeo documents what kinds of software and resources archaeologists have produced, enabling further investigation of research software engineering and digital peer-production practices in the discipline, both under-explored aspects of archaeological research practice.

Conceptual model documenting relationships between data recorded in open-archaeo and other relevant information in the source material and elsewhere on the web.

Some thoughts on data formality

I’m using this post to draw out some thoughts that feel coherent in my mind but that I struggle to communicate in writing. The general topic is the notion of formality, and how it is expressed in data work and data records.

In a management sense, formality involves adhering to standard protocol. It involves checking all the boxes, sticking to the book, and ensuring that behaviour conforms to institutional expectations. In this way, bureaucracy is the essence of formality. By extension, formality is a means through which power is expressed, in that it binds interactions to a certain set of acceptable possibilities. In effect, formality renders individual actions in subservience to a broader system of control.

But formality is also useful. Formality reduces the friction involved in transforming and transmitting information across contexts. Any application that implements a formal standard can access and transmit information according to that standard, which reduces cognitive overhead on the part of the actors responsible for processing information. Formal standards relocate creative agency upstream, towards managers of data and of labour, who make decisions about how other actors (human and non-human alike) may interact with the system before those interactions ever occur. This manifests itself in workflows, which are essentially disciplined ways of working directed towards targeted outcomes (I wrote about workflows in a 2021 paper and in my dissertation, which draws from Bill Caraher’s contribution to Critical Archaeology in the Digital Age, among other work he’s written on the topic). To be clear, I do not mean to imply that adopting workflows constitutes a negative act. An independent scholar may apply a workflow to help achieve their goals more effectively and efficiently, and it empowers them to get the most out of the resources at their disposal. However, one of the key findings from my dissertation is that when workflows are applied in collective enterprises, they tend to genericize labour and data for the purpose of extraction and appropriation, which is understood to be an ordinary aspect of archaeological research, as is evident from how actors performing genericized labour internalize it as part of their work role.

Developing a workflow essentially entails adopting and enforcing protocols and formats, which are series of documented norms and expectations that ensure that information may be made interchangeable. Protocols are standards that dictate means of direct communication, and formats are standards that dictate how information should be stored. Forms are interfaces through which information is translated from real-world experiences into standardized formats.

Formal data are information whose variables and values are arranged according to a formally-defined schema. A formal dataset comprises a series of records collated in a consistent manner, motivated by a need, desire or warrant to render them comparable. The formally-defined schema makes this potential for comparison much easier to realize. A common means of representing formal data is through tables, which are composed of rows and columns. Each row represents a record, and each column a variable that describes a facet of each record. The values recorded for each variable constitute observations or descriptive characterizations pertaining to the object of each record. One can therefore determine what kinds of structured observations were made about a recorded object by finding the values located at the intersection of records and variables (i.e., individual cells in a table). Each record is described by the same set of variables, applied across the whole dataset and documented in the schema.
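
To make this concrete, here is a minimal sketch of a formal dataset rendered as a LaTeX table; the records, variables and values are all hypothetical:

% A hypothetical formal dataset. Each row is a record, each column a
% variable defined in the schema, and each cell holds the value observed
% at their intersection.
\begin{tabular}{llll}
  \hline
  ID & Material & Context & Count \\ % the schema's variables
  \hline
  001 & ceramic & Trench A & 12 \\ % one record
  002 & lithic & Trench B & 4 \\ % another record
  \hline
\end{tabular}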

In its most extreme, formality entails a realm of total control, where all information is collected and processed according to an all-encompassing model of the world. It is not coincidental that models are the primary outlooks through which both managers and computer systems engage with the world. It has been the dream of bureaucrats and computer scientists alike to develop such systems (see the work of Paul Otlet, Vannevar Bush, and the weirdly techno-libertarian crowd associated with structured note-taking and personal knowledge management). Nor is it a coincidence that formality is a requisite aspect of both bureaucracies and computers. Computational environments and bureaucracies both effectively capture and maintain institutional power dynamics.

In some cases, such as with text boxes, the variable may be precisely defined but the values are left open-ended. However, users are still expected to provide certain kinds of information in these fields. I remember at a conference in 2019, Isto Huvila (whose work on archaeological records management is also a great source of inspiration) referred to these as “white boxes”, a clever and ironic play on the notion of “black boxes” that hide the intricate details of a process behind an opaque connotative entity, and one that also conveys their literal appearance. The standards are thus mediated by social and professional norms, but they exist nonetheless. This reflects the fact that social and professional norms, standards, and expectations will never go away; they are fundamental aspects of communication and participation within communities.

The million-dollar question nowadays (at least in my own mind) is: how can we create information infrastructures that strike a balance between the need to transmit information succinctly between computers via the web, and the capability to share context and subtext, whose significance originates in the gaps between recorded information and which gains meaning only in relation to the shared experiences of communicating agents as members of a social or professional community?

Recap: Digital Archaeology Bern 2023

Last week I travelled to Switzerland to participate in Digital Archaeology Bern (2023). The conference was themed “advancing open research into the next decade” and served as a way to take stock of developments since the 2012 World Archaeology Special Issue on Open Archaeology and Ben Marwick’s influential 2017 paper Computational Reproducibility in Archaeological Research, which came out 10 and 5 years ago, respectively. I think that the conference was a remarkable success, and all 50-60 participants were actively engaged in critical discussions on what it means to do open archaeology. You can find my slides and presentation notes on GitHub (https://github.com/zackbatist/DAB23).

The conference was not just superficial open-boosting, although there were some elements of this. Most, if not all, participants highlighted challenges and unanticipated implications of being open that they have recently experienced. Looking back, a few themes stood out:

  • Thinking about the value proposition that openness entails, which necessarily involves accounting for specific use cases and imagined future stakeholders.
  • Thinking about the needs and values of all stakeholders involved in doing archaeology, including local and Indigenous communities, land-owners, archivists, government agencies, and related parties, and what openness means for them.
  • Thinking about how we might reconcile our values as archaeologists with the values demanded and afforded by the infrastructures and communities with whom we must work.

I got to meet so many interesting people. I already knew many of them from social media, virtually-hosted talks, or brief in-person interactions at the CAA back in 2018, and it was really great to put a face to each person’s name. Most serious work in digital archaeology, especially productive work developing open data infrastructures, is being done in Europe, and I was very grateful to have this opportunity to connect with that crowd (especially since I’m currently entering the post-PhD academic job market). I think my paper was well-received and valued, and it opened the door to many interesting discussions during the breaks between sessions and elsewhere.

I was also able to tack on a couple of days at the start to work with Joe Roe on an article we’ve been writing for the better part of 3 years, about collaborative aspects of open source software development among archaeologists. We presented a paper at the 2021 CAA conference on the composition of open-archaeo, the list of open source software and resources made by and for archaeologists that I maintain, and we’re trying to expand on it a little bit more with some network analysis type stuff. So this time together really gave us an opportunity to discuss what we really want out of the paper, to actually talk through the results, and generally helped motivate us to get this done. We still have our work cut out for us, but that probably warrants its own blog post.

Anyway, here are some cool pictures from the trip!

I like LaTeX

So I think I finally understand LaTeX. Of course there’s still a lot for me to learn, but I think I’m at a point where I am really harnessing its true value.

I’ve been working with plaintext since I started writing my dissertation. Until very recently my workflow closely resembled an RMarkdown setup, and largely corresponded with this guide written by Ben Marwick. My simple understanding is that Pandoc passes the Markdown through LaTeX to produce a viable PDF, which makes it possible to scatter LaTeX throughout the content and in the YAML front matter. So I had a mish-mash of both Markdown and LaTeX conventions in most of my documents. For instance, I was using the comprehensive graphicx package to render my figures and I was referring to endnotes stored in a separate tex file using the sepfootnotes package, all while using Pandoc’s citeproc bibliographic referencing system. Eventually I came to realize that my workflow was fundamentally built upon LaTeX, or at least comprises functions that closely correspond to common LaTeX macros.
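
The kind of thing I mean looks like this: raw LaTeX dropped straight into a Markdown chapter file (the image filename here is made up), which Pandoc passes through untouched when the output target is PDF via LaTeX:

% Raw LaTeX embedded in a Markdown source file; Pandoc leaves it intact
% when rendering to PDF through LaTeX.
\begin{figure}
  \centering
  \includegraphics[width=0.8\textwidth]{site-plan.png} % hypothetical image
  \caption{A figure placed with graphicx rather than Markdown syntax.}
\end{figure}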

I worked using this hybrid Markdown/LaTeX setup for years, until last week when I was prompted by a member of my supervisory committee to compile a unified document so he could better understand the flow of my thesis. I had anticipated that I would need to convert everything to pure LaTeX at some point, and it was as good a time as any.

Previously, I had mashed together code from various Stack Overflow posts in order to generate something functional, but it’s a completely different experience when you start from scratch. I started by following the Overleaf guide on how to structure a thesis using LaTeX, and now I have a very robust and elegant setup that compiles a single, tidy, and systematic PDF from multiple sources. Here’s the current state of my main tex file:

% Page layout %
\documentclass{report}
\usepackage[margin=1in]{geometry}

% Line and paragraph spacing %
\usepackage{setspace}
\doublespacing
\setlength{\lineskip}{3.5pt}
\setlength{\lineskiplimit}{2pt}
\setlength{\parindent}{20pt}

% Table of contents %
\usepackage{tocloft}
\setlength{\cftbeforesecskip}{4pt}
\usepackage[page,titletoc,title]{appendix}

% Verbatim %
\usepackage{fvextra}
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{breaklines,commandchars=\\\{\},breaksymbol=}

% Data references %
\usepackage{sepfootnotes}
\newendnotes{x}
\newendnotes{y}
\newendnotes{z}
\input{refs/XXXX-refs.tex}
\input{refs/YYYY-refs.tex}
\input{refs/ZZZZ-refs.tex}
\input{refs/misc-refs.tex}
\renewcommand\thexnote{A\arabic {xnote}}
\renewcommand\theynote{B\arabic {ynote}}
\renewcommand\theznote{C\arabic {znote}}
\usepackage[multiple]{footmisc}

% Figures and text boxes %
\usepackage{graphicx}
\graphicspath{{images/}}
\usepackage[font=footnotesize,labelfont=bf]{caption}
\usepackage{float}
\usepackage{mdframed}
\mdfdefinestyle{quotes}{
linecolor=black,linewidth=1pt,
leftmargin=1cm,rightmargin=1cm,
skipabove=12pt,skipbelow=12pt
}

% Block quotes %
\usepackage{csquotes}

% Tables %
\usepackage{tabularx}

% Epigraph %
\usepackage{epigraph}
\setlength{\epigraphwidth}{4in}
\renewcommand{\textflush}{flushright}
\renewcommand{\epigraphsize}{\footnotesize}
\setlength\epigraphrule{1pt}

% Bibliographic citations %
\usepackage[
citestyle=authoryear,
bibstyle=authoryear,
maxcitenames=2,
maxbibnames=99,
uniquelist=false,
date=year,
url=false
]{biblatex}
\addbibresource{/Users/zackbatist/Dropbox/zotero/zack.bib}

\renewcommand*{\postnotedelim}{\addcolon\space}
\DeclareFieldFormat{postnote}{#1}
\DeclareFieldFormat{multipostnote}{#1}

% Hyperlinks %
\usepackage[colorlinks=true]{hyperref}
\hypersetup{
linkcolor=black,
citecolor=blue,
urlcolor=blue
}

% Title page %
\title{
{Thesis Title}\\
{\large University of Toronto}\\
{\normalsize Faculty of Information}\\
% {\includegraphics{university.jpg}}
}
\author{Zack Batist}
\date{\today}

\begin{document}
\maketitle

% Front matter %
% \chapter*{Abstract}
% \chapter*{Dedication}
% \chapter*{Declaration}
% \chapter*{Acknowledgements}

\tableofcontents
\listoffigures
\listoftables

% Content %
\chapter{Introduction}
\input{chapters/introduction.tex}
\chapter{Notions of Archaeological Data}
\input{chapters/notions-of-archaeological-data.tex}
\chapter{Theories of Discursive Action}
\input{chapters/theories-of-discursive-action.tex}
\chapter{Methods}
\input{chapters/methods.tex}
\chapter{Social Worlds}
\input{chapters/social-worlds.tex}
\chapter{Sites of Discursive Negotiation}
\input{chapters/sites-of-discursive-negotiation.tex}
\chapter{Sociotechnical Tensions Relating to Data}
\input{chapters/sociotechnical-tensions-relating-to-data.tex}
\chapter{Discussion / Future Directions}
(in progress)

% Bibliography %
\printbibliography[heading=bibintoc]

% Appendices %
\begin{appendices}
\chapter{Summary of Code System}
\chapter{Data Management Protocols}
\chapter{Open Data Supplement}
\section{Case A}
\thexnotes
\section{Case B}
\theynotes
\section{Case C}
\theznotes
\end{appendices}

\end{document}
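
To illustrate how the endnote machinery above hangs together: if I’m reading the sepfootnotes documentation right, each \newendnotes{x} call generates an \xnotecontent command for defining notes in the refs files and an \xnote command for citing them in the chapters, while \thexnotes prints the accumulated set in the appendix. A rough sketch with a made-up note name:

% In refs/XXXX-refs.tex: define each note's content.
\xnotecontent{interview-03}{A hypothetical note, e.g. details of an interview.}

% In a chapter file: cite the note, which renders as A1, A2, etc.
Participants described their recording practices.\xnote{interview-03}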

A large part of this work involved figuring out trends in the package ecosystem. I really struggled to differentiate between the various packages for bibliographic formatting, footnotes, figures and floats, and their cross-compatibility. Some packages seem to be developed according to common tendencies, sort of like the R Tidyverse (in the sense of sharing a generally common syntax, not in terms of cult behaviour), based around central cores such as biblatex and hyperref.

I’ve also been using LaTeX to format my CV, and today I started using it to create slides for an upcoming conference presentation. I like how programmatic the CV feels — you script a macro once, call it along with the text you want it to parse, and you get a pretty little table of all your career accomplishments! I’m still kind of undecided about using beamer for conference slides, but I do like how it encourages me to write concurrently as I create the slides.
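
As a sketch of what I mean (this particular macro and the entries are invented for illustration):

% Define the entry format once...
\newcommand{\cventry}[3]{%
  \noindent\textbf{#1} \hfill #2 \\ % title and year
  #3\par\medskip}
% ...then call it once per accomplishment.
\cventry{Open-archaeo data paper}{2023}{Journal of Open Archaeology Data.}
\cventry{ArcheoFOSS XVII}{2023}{Co-led a panel on archaeology and the fediverse.}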

Comments on “The rise and fall of peer review”

A substack post about peer review is getting a lot of attention, and I’m here to rant about it. Basically the post is calling out the peer review process as a terrible and broken system. And it is. But the author’s rhetoric about it is kind of problematic.

1. Peer review is not an experiment.

The author claims that it is, but contradicts himself straight away:

The experimental design wasn’t great; there was no randomization and no control group. Nobody was in charge, exactly, and nobody was really taking consistent measurements. And yet it was the most massive experiment ever run, and it included every scientist on Earth.

These are not just things that make an experiment bad; they are things that preclude peer review from being an experiment altogether. Experiments are run on samples, with intent, and in controlled environments. As someone who calls himself an experimental psychologist and who calls his blog “experimental history”, he really stretches the term experiment in weird ways. This use of the term is like referring to the “experiment of democracy”: basically just grand rhetoric for “we’re figuring things out and learning as we go”.

The author also seems to think of the experiment of peer review as a pass/fail test, which again, is not what experiments are for. He sets a bar for what successful scientific evaluation ought to look like, and measures his experiences of peer review against it. But this is not an experiment, this is qualitative assessment. There’s nothing wrong with that, but it’s troubling how the author wraps his proclamation that peer review is bad and should be abolished within some phony hearkening to science-core.

2. General discourse is not enough to validate truth statements.

Various parts of the post indicate that the author considers science to be the evaluation of statements of truth, which can only be verified by their fidelity to observed reality. Ok, fair enough. But he refers to Einstein’s large body of non-reviewed work as an argument for relying on discourse among educated fellows as an efficient way of evaluating the quality of scientific work. Einstein’s genius may be cemented in the popular imagination, but he also happened to be wrong about some things, and his example is not reason enough to abolish peer review. Moreover, the author does not consider the general acceptance of non-reviewed ideas that happened to be wrong as a counterpoint that clearly refutes his main point.

3. What about non-experimental methods?

The author has a huge blind spot for non-experimental methods. He suggests that if the results of a scientific analysis can be replicated, then that is good enough for acceptance into an authoritative canon of truth. Moreover, he indicates that work that does not replicate is “a whole lotta money for nothing”, basically a waste of time and resources. But a lot of science cannot be replicated, by virtue of the fact that science doesn’t always follow experimental protocols that allow for replication tests to be performed. Fantastic and valuable work that relies on non-experimental heuristics, including a lot of work in the social sciences and humanities, climate science, ecology, astronomy and various other fields, is left in the lurch. His take on non-replicability in these disciplines reads a lot like the unethical and ironically non-replicable Sokal hoaxes that serve as the basis for unhinged right-wing attacks on the social sciences and humanities.

This also contradicts the author’s hearkening to discourse among learned men of olde as a way of dealing with problems relating to peer review. Opening up the comments section, even if just limited to a curated list of credentialed scholars, is not the same as conducting independent replication studies. I think the reasoning behind this link is that if other people have experienced similar phenomena in their own labs, then a finding is more likely to be accepted as true. But this is not the same as replication under the same conditions; it applies the same uncontrolled, consensus-based evaluation criteria as peer review, just with an open filter.

4. Peer-review in context

I agree with many of the things that the author is saying. Yes, there are many ways in which peer review is broken and could be improved. For instance, I agree with the notion that peer reviewers do not dive deep enough into the data and aren’t always critical enough. But I think that this is because most people are unprepared to do so, either because they do not have access to the data or do not know how to work with statistics or read code. Moreover, certain journals like PNAS give preferential treatment to certain authors over others, and there are definitely major issues with racism and sexism in the evaluation process. Open peer review does not resolve these issues, largely because it treats peer review in isolation.

The only way to make peer review better is by instilling good scholarly practices in the next generation of scholars. However, this is inhibited by structural issues, such as the tight job market that favours quantity of peer-reviewed articles over any other factor, and the general prestige economy of academia. These are the root issues. The foul state of peer review is one aspect of this mess, alongside structural racism, sexism and transphobia, the sheer expense of obtaining an advanced degree and excelling in the years immediately post-PhD, and the pressure to conform to trends that get you funding. You cannot separate the problems with peer review from these issues. Yet somehow the author manages to completely sidestep these concerns, identifying the broken peer review system as a purely epistemic problem, rather than a problem with tangible and far-reaching social implications.

Comments on a recent “science mapping” paper

A new paper examining published research outputs to describe the makeup of archaeology as a discipline just dropped, and it’s getting a lot of positive attention.

Sinclair, A. 2022 Archaeological Research 2014 to 2021: an examination of its intellectual base, collaborative networks and conceptual language using science maps, Internet Archaeology 59. https://doi.org/10.11141/ia.59.10

I see some issues with the paper that I think are worth addressing. This is not a comprehensive review, more like a commentary based on my own interests and experiences. I welcome dialog with the author and anyone else who is interested in discussing this further.

I’m a bit hesitant to post this because I do not know the author, Anthony Sinclair, and I don’t want to come across as too harsh. I intentionally did not look him up prior to writing this post. This is a commentary of the paper, not the person behind it.

Simplistic description of network graphs

My first criticism is about the surface-level description of the network visualizations. Network visualizations are one of many ways of rendering a dataset, and this analysis would have really benefited from more multifaceted statistical treatment of the underlying data. For example, it would have been nice to see the distribution of nodes with different degrees of centrality compared against some other variable, such as gender. The author reverts to a plain and simple citation count in his analysis of gender disparities, and misses a great opportunity to draw upon centrality measurements as a key indicator of inequitable aspects of professional development across the genders.

The author also annotated the graphs with diagrams that look kind of like compass roses. I only found one instance in the text describing them and their function:

“In certain maps, the key dimensions that affect the layout of the maps are identified in one of the upper corners of the map.”

One of the network visualizations from the original paper. Note the compass rose in the top left corner.

What do these compass roses actually represent? Are they derived from the author’s interpretations, or from the dataset? This is unclear. In either case, I would have liked to understand the reasoning or approach for identifying the extremes at each end of the gradients, and how a node’s position along the scale is determined.

Framing of science and non-science

This paper perpetuates an outdated dichotomy between science and the arts and humanities. It never really defines either of these things, or attempts to reconcile the terms used by the citation databases with the author’s own notion of what science and the arts and humanities mean. But these terms appear in the compass roses and in the descriptions of the graph visualizations as if their meanings are self-evident.

Also very interesting is that the paper says a lot about science but not much about the arts and humanities. In fact, it may be more apt to say that this paper describes science and non-science, rather than two distinct but cohesive entities. The author describes journals, topics and methods that he identifies as scientific, but does not do this at all for entities that he relates to the arts and humanities. This lack of distinction reveals an unwillingness to treat the things these terms represent as things in themselves, rather than as the lack of something, namely, science.

The paper also relies on really outdated visions of the character of various disciplines and of archaeology specifically. As far as I can tell, it relies on two sources:

  • Pantin, C.F.A. 1968 The Relations Between the Sciences, Cambridge: Cambridge University Press.
  • Becher, T. and Trowler, P.R. 2001 Academic Tribes and Territories, 2nd Edition, Maidenhead: Society for Research into Higher Education/Open University Press.

Becher is extremely outdated and falls within a period when scientists (especially social scientists, including Binford) were aching to make their disciplines seem more scientific. So there is a strange value judgement at play, and these accounts often failed to capture the reality of how science actually works. The other source is mentioned only very briefly in passing, but follows a similar essentialist rhetoric regarding the fundamental nature of specific disciplines, which rubs me the wrong way. A lot of excellent work that examines the pragmatic reality of scientific practice, that highlights contradictions and misrepresentations, and that presents the fluidity across disciplines rather than hard distinctions is simply ignored (e.g. Latour and Woolgar’s Laboratory Life, Latour’s Pandora’s Hope, Knorr-Cetina’s Epistemic Cultures, Bowker’s Science on the Run, to name just a few).

Critical reflection on what the networks actually represent

The value of analyzing citation networks is unclear to me, and the author doesn’t really convince me that they represent a “window on the shape of the discipline”. Citation networks depict clusters of citations, but the jump to making these clusters meaningful in relation to some broader social or epistemic phenomenon is never really articulated. Moreover, the author indicates that he applied the Girvan-Newman method for identifying clusters, but doesn’t really incorporate the means through which this algorithm operates, including its limitations, into the analysis. Clusters do not simply exist; they are highlighted through some method, which impacts what we see.

After writing an initial draft I found that the author has done other relevant scientometric work with a more targeted scope:

Sinclair, A. (2020). From Specialty to Specialist: a citation analysis of Evolutionary Anthropology, Palaeolithic Archaeology and the Work of John Gowlett 1970-2018. In J. Cole, J. McNabb, M. Grove, & R. Hosfield (Eds.), Landscapes of Evolution: Studies in Honour of John Gowlett (pp. 175-201). Oxford: Archaeopress.

Although I do not have access to this paper, it is likely to be much more effective, since these kinds of analyses tend to work better when a more specific objective is outlined: it is easier to ground the relationships within a specific set of experiences than to rely on generalizations and abstractions.

Analysis of language and push for standardized terminology

I like the analysis of language and keywords. I think it’s the strongest part of the paper, and there’s a lot of potential there. However, the author channels this into a push towards standardization, which seems kind of forced and not especially relevant to the analysis of keywords across the literature. The author frames the diverse array of terminology as a problem that needs to be overcome, rather than a very interesting aspect of archaeological research practice with its own benefits and affordances. Standards implemented in the harder sciences are held up as goals worth attaining, but I’m left unconvinced that this is really worth doing based on the findings presented here.

Open science and its weird conception of data

In an early draft of one of my dissertation’s background chapters I wrote a ranty section about notions of data held by the open science movement that I find really annoying. I eventually excised this bit of text, and while it isn’t really worth assembling into any publication, I thought it may still be worth sharing here. So here is a lightly adapted version, original circa May 2022.
