Background
It has long been my conviction that students experience deeper learning when they pose their own questions. This comes from my own conversion experience, when I was finally asked, as a junior in college, to come up with my own question for a paper. While I found the process daunting, it forever changed the way I looked at learning. By asking my own question I finally experienced genuine intellectual engagement. This project initially sought to test whether students could be effectively taught to ask the kinds of questions that lead to deeper inquiry and understanding. But as I talked with my Carnegie colleagues, I became aware that I wasn't even certain what made a question "good." So the focus of my task shifted, from one centered on how I would teach students to one focused on learning and defining for myself what a good question was. More and more, though, it seemed important just to listen hard to the kinds of questions my students were asking: to see if I could use these questions as a window into their thought processes, motives, priorities, and assumptions about inquiry. And that, ultimately, became the purpose of my project.
Methods and Data Collection: The Question Log
The Course and the Students: The course was a basic composition course, required of all first-year students, taught in the fall term. The readings students generated questions from were argumentative essays, with two minor exceptions: one autobiographical essay and one short story, both of which have strongly implied arguments. There were 18 students in the class.
The Data: My data was the questions students generated in response to readings. My primary way of collecting these questions was through an online question log. It worked like this:
- I set up a website in which students could enter their questions after they'd read an assignment. After they'd entered all the questions that came easily to mind, I asked them to generate one more question, which I called a "push" question. The database generated by this website kept track of the student's name, the reading, the questions, and the type of question (regular or push).
- Students were to enter these questions by 8:00 p.m. on the night before the assignment was due. After 8:00 p.m., they went to another website, which listed all the questions the students had generated, but without names attached. I then asked them to choose, before they came to class, the three questions on the list they found most important or valuable, and to explain briefly why.
- Once in class, students broke up into small groups (randomly assigned, and different each time) and discussed the questions. Then we came together as a class and used the questions to guide our discussion of the text. Any new questions that came up I attempted to record on the computer (projected on screen, so that students could see them).
- Finally, I asked students to fill out report sheets about which questions they had found most useful in group discussion, in class discussion, and the ones they were still thinking about as the class ended.
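The workflow above implies a simple data model: each log entry records a student, a reading, a question, and a question type, and the after-deadline view strips the names. As a rough illustration only, here is a minimal sketch of that record and the anonymized listing; the original site's actual schema and field names are not given in the essay, so everything here is an assumption.

```python
from dataclasses import dataclass

# Hypothetical re-creation of the question-log record described above.
# Field names are assumptions; the original website's schema is unknown.
@dataclass
class QuestionEntry:
    student: str           # recorded, but hidden when questions are shared
    reading: str           # the assigned text the question responds to
    text: str              # the question itself
    kind: str = "regular"  # "regular" or "push" (the one extra effortful question)

log: list[QuestionEntry] = []

def submit(student: str, reading: str, text: str, kind: str = "regular") -> None:
    """A student enters a question before the 8:00 p.m. deadline."""
    log.append(QuestionEntry(student, reading, text, kind))

def anonymous_list(reading: str) -> list[str]:
    """The after-deadline view: all questions for a reading, names stripped."""
    return [q.text for q in log if q.reading == reading]
```

For example, after two students submit questions on the same reading, `anonymous_list` returns just the question texts, in submission order, with no names attached.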
The impact of my teaching on the data: I never explicitly taught my students any questioning techniques, gave them question stems to use, or talked about some kinds of questions as being better than others. I wanted a baseline; I wanted to see where students were, not where I wanted them to be. Nevertheless, as a teacher I undoubtedly had an effect. I would often direct the order in which the class addressed the questions; further, I would sometimes ask my own questions (which I tried to indicate on the new-questions list), and when students asked new questions I surely had some role in translating them into a form I thought was most useful ("Oh, so what you're really trying to ask is . . ."). Unfortunately, I have no specific records of my interventions, as they happened in class, on the fly.
Analysis and Results: What I Learned about Student Questioners
I examined about 500 questions from 7 different readings. I developed the following categories to classify them, based on what I interpreted to be the motive behind each question: Basic Comprehension, Interpretive Comprehension, Elaboration, Challenging/Testing, Implication, Loose Implication, Significance, Integration, Self-Reflection, and Non-Interrogative. After cataloguing each question, I determined the frequency with which each type occurred. Here are the results:
- Students were most likely to ask Elaboration questions (27.6%)--that is, questions that ask for information (usually factual) that the text does not provide. Though these questions were strictly informational, I could infer that many of them may have had critical intent behind them (e.g., "When was this article written?" could be trying to determine whether an argument's evidence is out of date).
- After Elaboration questions, students were most likely to ask Basic Comprehension (22.0%), Non-Interrogative (15.4%), and Interpretive Comprehension (14.7%) questions. The Non-Interrogative questions were often critical in nature: they tended to identify a problem with the text (they are non-interrogative, however, because they don't genuinely inquire so much as express or imply an opinion, e.g., "Where does this guy get off criticizing other people's heroes?").
- Students were especially unlikely to ask Integration (2.8%), Significance (1.4%), or Self-Reflection (0.35%) questions.
- Loose Implication questions (5.6%) outnumbered Implication questions (5.2%).
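The tally behind these percentages is straightforward: count how many coded questions fall into each category and divide by the total. As a sketch only (the essay does not describe how the counts were computed, and the sample data below is invented for illustration, not the actual 500 questions):

```python
from collections import Counter

# Category labels taken from the essay; the tallying code is an assumption.
CATEGORIES = [
    "Basic Comprehension", "Interpretive Comprehension", "Elaboration",
    "Challenging/Testing", "Implication", "Loose Implication",
    "Significance", "Integration", "Self-Reflection", "Non-Interrogative",
]

def frequencies(coded_questions: list[str]) -> dict[str, float]:
    """Percentage of questions falling into each category.

    coded_questions: one category label per coded question.
    """
    counts = Counter(coded_questions)
    total = len(coded_questions)
    return {cat: round(100 * counts[cat] / total, 1) for cat in CATEGORIES}

# Invented sample: 3 Elaboration, 2 Basic Comprehension, 1 Implication.
sample = ["Elaboration"] * 3 + ["Basic Comprehension"] * 2 + ["Implication"]
```

Here `frequencies(sample)` would report Elaboration at 50.0% of the six sample questions, with categories that never occur reported as 0.0.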
Question Categories and Criteria
A Model of Student Questioning
The prevalence of Elaboration, Basic Comprehension, Interpretive Comprehension, and Non-Interrogative questions suggests that students see questions as useful for three main things:
- to seek information (gather facts)
- to resolve confusion and misunderstanding
- and to argue with a text they disagree with (most non-interrogative questions were essentially critical in intent--they were assertions masquerading as questions, e.g. "What does some old guy know about the lives of college students, anyway?").
More evidence of this model came from student comments and reactions. One student, after I explained the question log, said, "But what if I understand the reading?" The implication here is that if you understand the basics of something, if you aren't confused, then there's no reason to ask a question. The idea of questions that don't clear up confusion but promote deeper understanding and further intellectual inquiry seems not to have occurred to many first-year college students. (Or else, perhaps, such questions seem more trouble than they're worth.) Students also had an unusually difficult time generating questions for an autobiographical essay that was easy to comprehend. When I asked them why they had trouble, they explained that they understood the essay, and that they couldn't argue with someone else's experience--again suggesting that comprehension and criticism are their main modes of questioning. When students do venture into slightly deeper territory with their inquiries--asking questions about the implications of a text, for example--they go a bit overboard. To some of these students, getting to deeper meaning means asking huge philosophical questions that bear only the most tenuous relation to the text itself ("Is duty to oneself more important than duty to one's community?"). In short, students can do details (small, factual questions), and they can do huge philosophical generalizations; they have more trouble asking questions that stick to the text but also go usefully beyond it.
Implications for Teaching: Questioning as a Process
I suppose there is a kind of implied hierarchy in the categories I've developed for questions. Elaboration and Basic Comprehension questions tend to seem lower on the intellectual scale than Implication or Integration questions. But what I learned most from coding and classifying these questions is that it's possible to ask very good or very bad questions in almost any category. There are some extremely simple comprehension questions that can rapidly lead to complex areas of inquiry. And there are plenty of questions involving seemingly higher-order skills that are pretty close to wretched. So no simple hierarchy will work here--or at least, it can only be part of the story.
What taught me most, finally, was not the act of adding up the numbers, but the act of trying to classify each question. Sometimes my difficulty in understanding or categorizing a question helped me realize what I thought a better question would do, or be, instead. I also kept a list of the questions that I thought were especially good, and tried to figure out what they had in common. My (admittedly idiosyncratic and partial) list of these results is available through the link below. More important to me than these specifics, however, is what I learned from struggling so hard to find the motive behind each question. As I said, the process was highly interpretive. In trying to determine the motive behind each question--particularly the Elaboration questions--I found myself asking, over and over, "Why does the student want to know this?" It's similar to the kind of question I ask when I write comments on student papers in which they're giving me lots of information, but none of it in the service of an argument. I write in the margins, "Why are you telling me this?" It was then that the analogy between writing-as-process and questioning-as-process occurred to me.
We assume that when students write first drafts they don't really know what they want to say; we assume they need to revise, clarify, re-think. Couldn't the same be true for questions--that on first attempts students may not yet know the real questions they want to ask, and that developing good questions is a process, one that would be forwarded considerably if students asked themselves, after each question, "Why do I want to know this?" This would lead, almost inevitably, to a chaining process. Students tend to see questions as largely isolated, one-time events; what if we taught them to use one question to build another, or to begin an internal dialogue? What if we taught them to interrogate the assumptions behind the questions they ask as rigorously as we ask them to interrogate the assumptions behind the arguments they make? And is it possible that the real hallmark of beginning questioners is not that they ask lower-order questions, but that they stop too early in the process of questioning?
Tendencies of Experienced and Inexperienced Questioners
Questions for Further Investigation
This ethnography of student questioning is mostly descriptive. Where might I go from here to find ways to actively improve student learning? Here are some questions I'm now imagining:
- How can we enlarge the student's model of inquiry from one that is pragmatic, information- or comprehension-seeking, to one that creates a problem (in Gerald Graff's terms) for the challenges and pleasures of intellectual inquiry?
- How can we teach students to ask more penetrating questions about readings they can comprehend on a basic level? Would asking students to begin with observations and comments instead of questions help them to generate useful questions?
- Does simply asking lots of questions make students more effective questioners?
- Does exposing students regularly to questions generated by the whole class expand their repertoire and range of questions?
- Does having students regularly analyze their own questions make them more effective questioners?
- Does learning how to ask good questions of readings translate at all into helping students formulate better questions for paper topics?