
Doing the Thing (and lessons learned)

So far, I’ve run about 20 people through the protocol. I was chatting with a friend who also does research, and this is apparently a VERY fast pace. So that makes me happy!

I thought I would be bored by now, doing the same thing over and over, but I’m more excited every time! Some of the interviews are frustrating, watching people search for things in completely illogical ways, but I’m learning a lot. Some people have a lot to say and we get a lot of info; others are more scattered and harder to make sense of.

I know that I want to do more research in the future, and there are some things I learned in this project that will help me in the future:

  1. Know the timeline. I think a big part of why people weren’t responding was that we’re in the last month of the semester. Students are busy, music students are BUSY. And for my own sake, packing all of this into the end of the semester has not been easy. Next time, I’ll get the IRB application in much sooner.
  2. Do more pilot testing. I come up with new directions or ideas for interview questions a lot, but in order to maintain consistency, I try to stick to the script. I think that maybe if I had done more pilot testing, I could have realized those things sooner.
  3. Use a relational database. Keeping track of all the participants, demographic data, invites sent, etc. has been somewhat difficult because I’ve been using a flat Excel file with multiple tabs that all have to be updated. If I had a relational database, I could run queries to show only the data I needed at the time and have forms for input (see the sketch just below this list).
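
Since I keep coming back to the database idea, here’s a minimal sketch of what I mean, using Python’s built-in sqlite3. The tables and columns are illustrative placeholders, not what we actually tracked (the real project lived in Excel):

```python
# A minimal sketch of the relational setup I have in mind, using Python's
# built-in sqlite3. Table and column names are illustrative placeholders.
import sqlite3

conn = sqlite3.connect("participants.db")
cur = conn.cursor()

# One table per kind of thing, linked by participant id, instead of
# parallel spreadsheet tabs that all have to be kept in sync by hand.
cur.executescript("""
CREATE TABLE IF NOT EXISTS participants (
    id            INTEGER PRIMARY KEY,
    degree_level  TEXT,
    international INTEGER,  -- 0 or 1
    first_gen     INTEGER   -- 0 or 1
);
CREATE TABLE IF NOT EXISTS invites (
    participant_id INTEGER REFERENCES participants(id),
    sent_on        TEXT,
    responded      INTEGER  -- 0 or 1
);
""")

# A single query replaces cross-referencing tabs: everyone who has been
# invited but hasn't responded yet.
cur.execute("""
    SELECT p.id, p.degree_level, i.sent_on
    FROM participants p
    JOIN invites i ON i.participant_id = p.id
    WHERE i.responded = 0
""")
print(cur.fetchall())
conn.close()
```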

Signing people up, getting people in

Given the size of the incentive we were offering (and the number of people who expressed interest), I definitely wasn’t expecting to have any trouble getting people to sign up for the interviews or show up to them. Again, I learned something. In order to get 27 participants to actually show up, I think I invited almost 100 people.

I used a scheduling app called Calendly, which I highly recommend. You set up time blocks of availability, tell the app how long each meeting should be and at what interval they should be offered, fill in a few other details, and then you can send out a link to your participants. They just choose a time, everyone gets an email, and it’s that easy!

Many people I invited to sign up never did. A handful signed up but no-showed and never rescheduled. I just really can’t understand skipping out on easy money. But who knows!

Costs

Before we started the project, I thought it would mostly be about putting time in – running participants, analyzing data, writing, etc. But it turns out that conducting research can cost a good amount of money. We decided up front that we wanted to offer a generous incentive: we were sort of asking a lot (a whole hour of interviewing and searching and me badgering them with questions) and we really wanted students who would be in it just for the money. Our thinking was that those students would be more average (read: representative) in their abilities–going back to the phenomenon of most research participants being high achieving. So for 28 people, that was a good chunk of change.

The next cost to consider was transcription. If you’re recording hour-long interviews, you need to have them transcribed so they can be more easily analyzed. We thought about having students do it, but for speed and accuracy reasons, we went with hiring a service, which is a bit more expensive but a lot more reliable. Luckily, Misti was able to obtain a grant from the Indiana University Librarians Association (InULA) that covered a lot of the transcription.

One thing I never thought of was the cost of analysis software. You need pretty powerful software to analyze this kind of data, and I figured a place like IU would definitely have it. Well, they do, but it turns out a license is pretty expensive and the department is expected to foot the bill.

For the most part, we were able to get everything covered. And now I see why grant funding on CVs is such a big deal!

Approved!

There was another round of back and forth with the IRB application (I forgot to upload an attachment—the system is kind of complex), and finally we were approved! Unfortunately it was the day before spring break, so we decided to wait until Tuesday after break (Monday emails tend to get lost) to start advertising to participants.

Best news so far: within the first HOUR, we had over 100 people sign up to do the study. That is Huge™

Misti and I anticipated that we would have a good response rate: music students tend to be more active/involved and apt to participate, but this was better than we could have hoped for. We ended up with about 225 completed surveys (people to choose from). Qualtrics made it easy to download and manipulate the data. I started by taking out people who weren’t really in the target groups, then randomly assigned ID numbers and randomly chose participants to invite for interviews (roughly along the lines of the sketch below).
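
For the curious, here’s a rough sketch of those selection steps in Python with pandas, assuming a CSV export from Qualtrics. The column names and the number of invites are placeholders, not our actual survey fields:

```python
# A rough sketch of the selection steps, assuming a pandas DataFrame loaded
# from a Qualtrics CSV export. Column names and counts are placeholders.
import random
import pandas as pd

df = pd.read_csv("selection_survey.csv")

# 1. Drop respondents who aren't in the target groups.
target_levels = {"undergraduate", "masters", "doctoral"}
df = df[df["degree_level"].str.lower().isin(target_levels)]

# 2. Assign random ID numbers so the analysis never has to touch names or emails.
df["participant_id"] = random.sample(range(1000, 10000), len(df))

# 3. Randomly choose people to invite for interviews.
n_invites = 30  # placeholder
invitees = df.sample(n=n_invites, random_state=42)
print(invitees["participant_id"].tolist())
```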

A sample

One of the most ~daunting~ things about research is that everything you do early on has significant ripple effects for everything later on. Your sample, or who you choose as participants, is really important because it determines the data you will be able to collect and how reliable your results will be. If your sample is very narrow–everyone in it is very similar–your data will only tell you about that type of person. If the sample is too broad or imbalanced, it will be difficult to draw conclusions based on variables related to the people in the sample. For example, if you have a sample of 10, 8 women and 2 men, the data you collect will not be reliable in relation to men.

We have decided that our primary variable is the amount of information literacy intervention the student has had, which also generally corresponds to degree level. We’re also interested in international students and first-generation college students. The plan is to pull a quota sample that equally represents all of these major characteristics. At the end of the day, I’m not sure that there will be significant differences along the lines of international and first gen students, but I do believe it’s vitally important that those students are represented in this research. There is a phenomenon in social science research that participants tend to be white, educated, and high-achieving; it’s important to actively combat that.
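
Here’s a hedged sketch of what pulling that kind of quota sample could look like in pandas; the column names and quota size are made up for illustration, not from our actual survey:

```python
# A hedged sketch of a quota sample with pandas: take (up to) the same number
# of people from every combination of the characteristics we care about.
# Column names and the quota per cell are placeholders.
import pandas as pd

df = pd.read_csv("selection_survey.csv")
per_cell = 3  # placeholder quota per combination

quota_sample = (
    df.groupby(["degree_level", "international", "first_gen"], group_keys=False)
      .apply(lambda g: g.sample(n=min(per_cell, len(g)), random_state=1))
)

# Check how the sample actually fills each cell.
print(quota_sample.groupby(["degree_level", "international", "first_gen"]).size())
```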

IRB edits

Once all our ducks were in a row, we submitted our IRB application—a detailed questionnaire and six accompanying documents. It’s simultaneously terrifying and exciting! It took about a week and we got a response: they felt a few things were missing or unclear. We described a selection survey that collected demographic and contact information (we were planning on using the survey responses to select participants so we could have the sample we wanted); its purpose wasn’t completely clear and the reviewer wanted us to specify what would be done with that data. The reviewer also felt that our search tasks fell into their definition of “benign behavioral interventions”—just a checkbox. One of the reviewer’s notes did surprise me: they wanted to know how we were planning to send the email advertising the study; a little unexpected, but luckily our library director is also head of IT for the School of Music, so he has access to their email distribution lists. There were a few other small things on the study info sheet (the selection survey again(!) and the audio recording).

IRB

The reason there’s so much work on the front end of a study is IRB approval. The IRB (Institutional Review Board) oversees research involving human subjects at an institution (in this case IU) and we need their approval before we can start seeking participants or collecting data. Most research institutions have their own IRBs (or similar oversight office) and they exist to protect research participants from (un)intentional harm or risk. They have a somewhat dramatic history, but are largely routine now. A study like ours—very low risk and non-invasive—undergoes what is called Exempt Review, the lowest level. (Expedited is the middle level, and the highest is Full-Board Review.) Because we’re not collecting sensitive data (grades, assignments, GPA) and there’s no intervention involved, we just need to have our materials approved to make sure that we’re accurately representing our study and the data we collect. This includes the instrument itself (in this case a Qualtrics survey), any planned interview questions, the study information sheet (required disclosures), recruitment materials, and any survey data. To that end, I had to design the flyers that we intended to post and draft all of the communication we were going to use to get people to sign up for the study, on top of the actual documents that go into the study itself.

Piloting

Everything that matters needs refining—a data collection instrument is no exception. I spent about a month designing the instrument (coming up with content that covered all the things we wanted to test) and testing it on my own. I probably spent another month just testing it with other people, which turned out to be hugely important. I only had maybe a handful of people go through the protocol, but each time I learned a lot and changed a lot. When I started out, I was having participants write things on index cards; that wasn’t effective. A lot of the tweaks were in the language: things that were obvious to me were not apparent at all to the participants, some of the instructions were not nearly specific enough, and some were too specific. Having other people see and use the instrument revealed a lot of things I never would have thought of and in the end that made the instrument stronger.

Answering a research question (recovery/discovery)

At some point during the piloting and editing phase, I met with Andrew and Misti to go over the protocol (the portion of the instrument that participants interact with). Andrew was concerned about the type of search tasks that we were using: in his experience, discovery searches tended to yield better results than recovery. In the grand scheme of the research process, the difference is probably negligible–either you find the thing you need, or you don’t. But they are two fundamentally different (and opposed) types of searches.

A recovery search, also called a known-item search, is pretty simple: “find a book called Pride and Prejudice.” You know an item exists, you just have to go into the database and recover it using some detail(s) about it (title, author, publication).

Discovery searches deal with evaluating resources against more abstract needs. This is where you go into the search bar with a general topic, search based on some related keywords, and discover potentially relevant resources. There is no predetermined endpoint.

This was an important consideration for us as a team because it lies at the heart of the study. Misti and I made the argument that (1) these searches are representative of the types of things music students search for and (2) the recovery aspects are actually more complex than a typical recovery search (i.e. formats, editions, languages, and so on).

Instrumentation

Research is about gathering or recording evidence to support conclusions. Evidence is made up of data. Sometimes data exists out in the ether and you just have to go and grab it; sometimes you have to find a way to create and collect it. In the case of creating, you need to know what data you want and how to create it.

In this study, we want data about how music students search for things, so we know that the method will involve having students search or asking them about searching. Specifically, we want to be able to draw conclusions about the obstacles that music students face in their search process, so the data needs to show those obstacles. In reality, most students—even music students—are not particularly aware of their search process, much less the obstacles (it’s very difficult to be aware of what you don’t know); because of this, asking them about it is not really an option. So we must set them on the course and discover the obstacles as they reach them; that is, we need to have them do the searching and watch what happens (or doesn’t!). I designed 6 search tasks, 2 each for articles, books, and scores. There is diversity in genre (vocal, instrumental, opera, chamber, theory) and language (German, French, Italian, English). The hope is that the diversity in the questions combines well with the diversity in the participants!
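
To make those design constraints concrete, here’s a small illustration of how the six task slots break down across format, genre, and language. These entries are placeholders, not the actual tasks participants will see:

```python
# Placeholder entries only -- not the real tasks -- showing the six slots:
# two each for articles, books, and scores, with variety in genre and language.
from collections import Counter

search_tasks = [
    {"format": "article", "genre": "theory",       "language": "English"},
    {"format": "article", "genre": "opera",        "language": "Italian"},
    {"format": "book",    "genre": "vocal",        "language": "German"},
    {"format": "book",    "genre": "chamber",      "language": "English"},
    {"format": "score",   "genre": "instrumental", "language": "French"},
    {"format": "score",   "genre": "opera",        "language": "Italian"},
]

# Sanity check: each format appears exactly twice.
assert Counter(t["format"] for t in search_tasks) == Counter(
    {"article": 2, "book": 2, "score": 2}
)
```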