Text originally published at Peer-Produced Research Lab on 18/05/2021
Gary Wolf, journalist, researcher and contributing editor at Wired magazine since its beginnings, is the co-founder of Quantified Self, an international community of users and makers of self-tracking tools who share an interest in self-experimentation and “self-knowledge through numbers”. Gary is also one of the board directors of the Open Humans Foundation and an active contributor to the Open Humans self-research community. We interviewed Gary during data collection for a study on motivations, learnings and peer support in self-research communities (more info here), and since Gary gave permission to use his non-anonymized data (“I’m a journalist!”), we reproduce our conversation here, with minor edits.
Thank you, Gary, for agreeing to be interviewed for our study, and for the opportunity to learn firsthand about your perspectives on personal science and self-research communities like Quantified Self and Open Humans. We would like to start by getting to know a little bit more about your background and your initial experience with self-research.
I am a science journalist, and I have had a long career reporting on and writing about science and technology. Around 2005-2006, I was on a Knight Journalism Fellowship here in the US and I began to do some research and reporting about ways in which people were using computing to explore fairly intimate aspects of their own personal lives, including things like how their memory functions. At the same time there were a lot of new technologies emerging that were bringing computing very close to people, to their daily activities. This was a time when GPS geolocation was being incorporated into many popular tools, and things like digital self-tracking devices were just beginning to emerge. And it was also a time when the focus of the technology community was very much on social media. If you remember, this was when Facebook was taking off. So everybody was looking at these kinds of social technologies. Together with one of my Wired magazine colleagues, Kevin Kelly, who was my editor there at the time, my attention started to go in a different direction, towards the highly personal, highly individualistic and often private uses of technology. Instead of being a relationship between yourself and countless others, it was about the relationship between yourself and yourself. So this began out of curiosity. Kevin and I decided, almost as a research method, to convene a group of people, some of whom we knew, some of whom we didn’t know, who were using these technologies to explore their own personal questions.
Like a self-organized focus group in a way, would that be the correct equivalent?
Well, a focus group tends to be something usually associated with marketing and developing products. This was really more like journalistic reporting. Kevin and I both had experience with technology users groups in the Bay Area. I had been a member of the Macintosh users group in Berkeley, in the 80s, which was an important group for sharing ideas about how to use this new tool, the Macintosh, soon after it came out.
Was it more of a user experience type of approach, in a way, or not necessarily?
Let’s see. This is an important distinction, because if we say user experience, or if we say focus group, the emphasis is on people creating technology using feedback from users to optimize or develop new technologies. But Quantified Self comes from a different tradition, in which users develop tools and approaches to serve themselves. In the history of technology development there is a close relation, and a lot of overlap, between early users and developers of digital tools, but a users group serves itself first rather than the developers. We were very aware of this tradition and part of it. Our first meeting was at Kevin’s studio in Pacifica, California; there were a couple dozen people there. The two of us had planned just to discuss how we were using these technologies. We were standing around, getting ready to start, when somebody walked into the room, the last person to arrive. As we were about to start a round of introductions, Kevin just said to him, “well, you came in last, you go first”. And instead of introducing himself in a “normal” way, he said, “I guess I’ll just show you what I’ve been doing for the past year”. Then he showed us a very detailed time diary, broken down into 15-minute increments, of how he had spent his time over the previous year. This was very exciting, it was like a moment of discovery! We were very happy and interested to see this project, and afterwards other people started sharing their own projects and giving little talks; it was all very spontaneous. Then somebody there from New York, a teacher and designer in interactive technology, decided to hold a very similar type of meeting in New York. All of this came out of our first exploratory meetings; there was no plan at first to create a network or community, or anything like that.
It sounds like there was probably some sort of precondition for this, that it didn’t happen just by chance. Something that happened at the right moment with the right people, whoever was there, in the spirit of unconferences.
Yes, very much. It was a very interesting moment where, you know, we were partially planning, partially lucky. We had several more meetings over the next year. And then we faced a serious challenge, you could almost call it a “threat”, to the experience that we wanted to have. This area that you could also call “very personal computing” was growing as a technology sector, and there was a lot of investment coming into it. So at the meetings some people started getting up and speaking, but they weren’t really talking about their own personal experience, or about learning about themselves using these technologies. You could say they were the same type of people, but they had a very different expectation of the meeting. Although the meetings always had tech-oriented people, with a general interest in novelty and new technologies, the majority were talking about themselves. But it got to a point where it became more common to share business news, with the expectation that Quantified Self was an entrepreneurial group that would be interested in pitches for software or hardware related to self-tracking. And as you know, this usually involves a little bit of bragging, a little bit of speculation and a little bit of salesmanship. That was not what we set out to do; there are plenty of other places where people can do this. And other people, who were doing what we came to call self-research, began to complain and express disappointment. We realized we had to find a way to state the expectation that this was a meeting where people could share their own techniques and their own self-research skills, and could confide them to each other. This is where the “Show & tell” format came from. I was wondering, “What was good about those first meetings? What did we like when this person walked into the room and shared his diary? Why did we like that?”. I thought, it’s a lot like a show and tell day in elementary school. It’s personal and informal, and it’s about something that you care about. So I wrote up a short document to explain the format and attached it to the invitation to the meetings. And I made one more design change, suggested by a friend of mine who had been observing. He told me: “you just need to turn your chair around and keep checking on the group, and if things go off track just say something”. And that’s what I started to do. I just turned my chair around, and if somebody started to pitch their company, I would say, “look, this is very interesting, but we’re doing something different here. If you really like this technology that you’re selling, why don’t you tell us how you use it yourself?”. And this really worked, creating a kind of ethic in the group that spread even to the other groups. Something I find especially interesting is that this ethic works to hold the group together even though it is not always obeyed. I would be surprised if it was obeyed more than 60% or 70% of the time. But it turns out that you can have a rule that is generally known but only partially obeyed, and it will still function to support the community. This was very important because it allowed the Quantified Self community to have an identity while not being too oppressive or constraining.
And we are talking about a project that, as you described, started more than a decade ago. How do you see it now, looking back at its beginnings?
What’s been absolutely fascinating during that decade is that we’ve been through at least one full cycle of “boom and bust” in the industry, because at some point wearables became this huge, dominant technological trend. And then came the giant disappointments, and everyone said “well, wearables are over”. And you know, in a way, we experienced the consequences of that bust ourselves; it affected us. But it didn’t destroy us, because we were never fully identified with just the technical culture. The last four years have been a period of consolidation, reflection, and translation of what was happening in the early years into a new form, as the industry has changed and left us in a different place. The industry has moved on to other things, as a sort of natural evolution. We managed to navigate it and hold on to our core idea, which is that you can use empirical observation to reason about your own questions.
Please tell us more about how you discovered the Open Humans project back in those days, when lots of other people and organizations started to take an interest in the Quantified Self community.
I came across Open Humans originally through an exploration of the implications of Quantified Self for public health. I had many friends and colleagues who were working on how self-collected data could be used to support health generally. A dominant theme in the Quantified Self community was that a lot of people’s projects had to do with health. So it seemed perfectly logical that other people who cared a lot about health would be looking at people’s self-collected data and how to use it to improve public health. It’s logical, yes, but it turns out to be extremely difficult. We were quite naive at the beginning. We got a grant to try to contribute in that direction, to build a bridge between the technology companies that were supporting self-collected data (like Fitbit, RunKeeper, Misfit or Jawbone) and academic and clinical research, especially academic public health research. One of the things that public health researchers wanted to do was to access the self-collected data that people were tracking in their own personal lives. They wanted to use it to improve public health, but it was really difficult to access such data. So we spent four years working on this, with excellent collaborators and senior experts across industry, foundations and science. But in the end our impact was minimal. It was a very difficult challenge. I know the difficulties better than I would like to! This will sound a bit abstract, but in my view the fundamental problem is one of conflicting epistemologies. Public health cares about populations. Self-research cares mostly about a single individual. And this misalignment affects everything: tools, protocols, methods, incentives, and modes of communication. And because there are hardcore interests at stake, including money, careers, and institutional power, you can’t just get a group of well-meaning people together and agree on something. We had the right people in the room. And they pretended to agree. But after many discussions and research papers and proposals and agendas, the world stayed more or less the same. To me it would be unbearable to pretend otherwise, because it would mean that after all that work we hadn’t learned anything! But we did learn something, which is that you can’t talk your way to a solution. That’s just not the way; that’s just not going to solve anything. Mad Ball, however, came to a number of those meetings to present Open Humans, which had a unique model as a data store and a project platform with individual control and community governance. Nobody else was thinking about the problem the way Mad and others at Open Humans were thinking about it. Over time, Mad, and then Bastian Greshake Tzovaras, and core contributors to the Quantified Self community have done many projects together, and we’ve all been learning a lot. Fundamentally, our approach now is to make more than to talk.
This sounds like a process in which, apart from benefiting other self-researchers, you were also beneficiaries, learning through different stages while developing the core vision and tools of Open Humans…
For me, the great value of Open Humans lies in its focus on supporting the discoveries, workflows and permissions associated with individual self-research, from which point it becomes possible to build upward into collective knowledge. The aim is not a big data aggregate but, instead, supporting collaboration and discovery. For example, we had a project (QS Blood Testers) with one of my colleagues, where basically about 20 of us were measuring our blood cholesterol as often as once an hour. When we were planning that project, we realized we needed a data store that was accessible to the key research collaborator, Azure Grant, and that was both ethically legitimate in terms of its privacy design and able to prioritize individual participant agency. We needed a system that allowed individual participants to withdraw from the project at any moment and take their data with them. But most academic data stores are not set up to prioritize individual agency this way. Instead of trying to convince institutional decision makers that we knew what we were doing, and getting lost in endless discussions at a high theoretical level, we were able to use the Open Humans infrastructure to help us develop the appropriate workflows.
This seems clearly connected to open source principles and practices, and to the way learning is usually embedded in those processes.
I think it is important to keep in mind that this is an intense learning process among a relatively small group of dedicated people who are learning very quickly by doing things, discovering what actually works. So there’s this back and forth between what the theory says and what really works in practice. And when you work like this to accomplish a given project, it’s almost upsetting how quickly you go past the conventional wisdom of how things should work: you find out that some things don’t work the way you think they should, so you have to find a different way to make them possible.
And in parallel you were activating the network properties of the community, as a distributed organization in some way, resilient and creative, right?
There’s a quote that I just remembered from a person I follow on Twitter named David Chapman, who described API documentation as a genre of “fantasy fiction”. When we started, we shared an assumption with many people working with self-collected data that much of the work had already been done. You start from a place thinking, “well, let’s just build some pipes that go from a wearables API here into a data store. And then once we have the pipes hooked up, we’ll have a bunch of data that we can connect to different projects, and everybody in the community can use it.” You think that’s going to work. But as soon as you look at what actually happens, you find out you have 50 times more work to do than you initially thought, and this work never ends. The most important lesson is that there isn’t a one-size-fits-all approach; there’s no such thing as “all the data”. How you handle data flows depends on your specific goal, and when that goal is set by individuals and small groups of collaborators, you have to focus on supporting them, rather than on fantasy-fiction universal data aggregates and API hookups. I had many conversations with Mad over the months and years, contributing to their attempt to navigate and shift the mission of Open Humans from one focused on serving academic researchers to one that served the interests of participants. Mad turned Open Humans into a place of participatory science in which the agency of participants (and not just their agency as contributors, but their agency as thinkers) was prioritized in the design of the system. And that is a very different orientation, involving a collective learning process.
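To make the “pipes” image concrete, here is a minimal sketch of that naive assumption: pull one day of step counts from a wearables API and append it to a local data store. The endpoint, token and field names below are hypothetical, invented only for illustration; in practice, authentication flows, pagination, rate limits and per-vendor schema drift are where the “50 times more work” shows up.

```python
# A minimal sketch of the "just build some pipes" mental model described above.
# The URL, token and field names are hypothetical placeholders, not a real API.
import sqlite3
import requests

API_URL = "https://api.example-wearable.com/v1/steps"  # hypothetical endpoint
TOKEN = "user-supplied-access-token"                   # placeholder credential


def fetch_steps(day: str) -> list[dict]:
    """Fetch one day of step counts. Real services add pagination, rate
    limits, token refresh and per-vendor schemas on top of this."""
    resp = requests.get(
        API_URL,
        params={"date": day},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("entries", [])


def store_steps(entries: list[dict], db_path: str = "selfdata.db") -> None:
    """Append entries to a local SQLite table; schema drift across vendors
    is the part this sketch conveniently ignores."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS steps (ts TEXT, count INTEGER)")
        conn.executemany(
            "INSERT INTO steps (ts, count) VALUES (?, ?)",
            [(e.get("timestamp"), e.get("steps")) for e in entries],
        )


if __name__ == "__main__":
    store_steps(fetch_steps("2021-05-18"))
```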
This type of network or community learning, growing through personal curiosity and collaboration, seems to have been based on shared goals among peers, from what you say. And also on being an entity which is “alive” to some extent, evolving organically…
Yes. Technologies that can work for many people embody a lot of learning. Eventually this learning becomes implicit in the technology; it becomes something that is assumed, almost invisible. But in developing these technologies, actual people have to learn things they don’t know yet. That learning process is usually very poorly documented. Michael Polanyi, who wrote the book Personal Knowledge, asked a very good question in that book: “what happens when a country decides, based on its scientific policy or its development policy, that it wants to encourage the study of a certain branch of science?” For instance, physical chemistry. What they do is hire somebody from another country where they already do physical chemistry to come and start a department. Why don’t they just read the literature? Science disseminates its results, right? Why can’t they just read that literature, open a lab and put some smart people in it to go through the steps of learning how to do physical chemistry? He says this is impossible, because most of what is involved in doing physical chemistry is not written down and cannot be derived from the published results. In my view, the same thing is true of personal science, the use of formal empirical methods to address personal questions. We know some people can do it. We have been doing it. But we want more people to be able to do it, and for this we have to do something that is quite different: we have to help them learn, we have to figure out what elements need to be made explicit, and we need to provide them with tools and support so they don’t have to reinvent the wheel. Here I think we get to what Open Humans is still trying to do, which it hasn’t fully succeeded in doing yet and which is the current challenge: to be used by many people. It’s not reasonable to request or require that all the people who use it have the same level of technical skills and experience as the early contributors; that is not going to work. What has to happen is that some part of that knowledge and experience has to be embodied in tools, where it becomes implicit. And other elements of the knowledge and experience have to be transmitted through materials and personal interaction. Knowing how to do that, and doing it, happens in this existing community that grows through shared practice and imitation. How fast can we go? We’re impatient, but the truth is we don’t know.
Now, changing the focus a bit, we would like to know how you see the connection between citizen science and personal science. Do you think we are talking about the same thing?
To me, citizen science is defined by the involvement of non-professionals in significant parts of scientific activity. So it’s a very broad term, because there can be different types of involvement. What I call personal science is the use of empirical, scientific methods to explore personal questions, your own personal questions. The fundamental control of the research process is in the hands of the person doing the self-research. In contrast, in citizen science, the citizens are usually considered “non-professional scientists” who are making contributions to answering questions that emerge from a professional scientific discipline. Where you see citizens involved in science but not directing the agenda, you have citizen science, but you don’t have personal science. However, what unites personal science with all varieties of citizen science is a common commitment to democratic participation in scientific culture. That’s what we share.
And to what extent do you consider that personal science is exclusively about yourself?
I would describe it as a set of concentric circles in which your own individual questions are at the center, but of course we care deeply about other things in our personal lives besides our individual selves. For example, we may be caregiving for somebody whose unanswered health questions are vital to us, and we’re thinking about our kids, parents, neighbors, etc. When we use empirical approaches to help us better care for people close to us, I would still call that personal science. But if I were to ask a question like “how can my data contribute to a better understanding of the effect of noise on cities?”, or something like that, I would not call that personal science; that’s citizen science.
Regarding community-based participatory research, in that sense, how do you see its relationship to, or differences from, personal science?
We’re all part of a larger family of democratizing knowledge practices. I’ll give you an example of how community science and personal science sometimes join. There are lots of communities concerned about emissions from power plants and toxic pollution in the atmosphere. One way to do citizen science in that context is to have people monitor air pollutants around their houses and contribute this data to a regulatory agency. However (and here is where you get that clash of epistemologies between public health and personal science), none of the standards or regulatory guidelines about toxic pollution consider the complex effects of, for instance, intermittent pollution on people who already have asthma. You’re never going to get data like that into the civic debate about pollution, because it’s very noisy. Companies will say, “levels have been on average below the regulatory requirement every single day for the last five years.” And I can come and say, “look, I can find some hours in which the levels were elevated, even if the average remained low.” And for a person who suffers from a long-term chronic condition, these peak elevations may be a big problem. So personal science argues that individual results matter; and community-based participatory research wants to assert the legitimacy of these results too, but the administrative and regulatory frameworks don’t allow it. Here you see the clash, and it is not one that will be easily resolved.
By Enric Senabre Hidalgo and Morgane Opoix.