This week’s activity was among the most useful in this course so far, in my opinion. While we’ve been examining and discussing the concept of Appreciative Inquiry and its various elements at length–including reading about case studies that use an AI approach–this week’s work tied all of those individual ideas and applications together well.
I chose to dissect case study #16, which dealt with how AI methods were applied to an evaluation of Chile’s health quality assurance program. The fact that Appreciative Inquiry was not only employed for the evaluation in this case study, but that it wound up being adopted by participants for future studies, is a testament, I think, to the effectiveness of AI.
And the whole idea of evaluation capacity building makes sense to me. It’s not just about applying AI methods to a particular evaluation in the hopes of achieving better results and honing evaluation methods–as valuable as those goals are in and of themselves–it’s about shaping attitudes and encouraging future behaviors. It’s about adopting Appreciative Inquiry not just within the confines of a particular evaluation, but establishing a context for an overall AI philosophy within an organization, one that impacts its other facets as well.
As far as the status of my project–where I am and what I have yet to do–I think I have a decent overall structure in place, with some of the broad strokes painted–I just need to get out the finer brushes and paint a more detailed picture.
I found this week’s survey development activity to be quite interesting. At first glance, it seemed that creating survey questions would be easier than interview questions, but then I began delving deeper into the process with the assistance of my dyad partner, Lorikay. Through the entire process, I developed a real appreciation for the skill of precisely wording survey questions, not only to help ensure the needed data is collected, but also to frame the survey in such a way that the user is in the right mindset and compelled to provide the most truthful responses. Plus, there’s the component of question order–not just keeping questions sequential for the sake of making sense, but creating a cognitive flow that leads the user where you need them to go.
Even though I found the Moodle chat useful, I think the “think aloud” session over the phone with Lorikay worked better in some ways. While the chat did allow for more deliberation over the wording of questions because of the visual representation, collaborating over the phone allowed for more immediate responses to feedback–there wasn’t the delay caused by typing. Still, I think the combination of both methods for the different components of the activity proved effective overall.
Even though it’s been difficult to see “down the road” how all the individual elements of our final projects fit into place, it is becoming clearer with each step. And I think it’ll be interesting to see everyone’s final product with all the pieces in place–especially while keeping the overall process in mind.
I found this week’s activity to be among the most valuable in this course so far. The free exchange of ideas and feedback with a fellow student is most helpful, largely because we are all trying to wrap our respective heads around the concepts of evaluation as a whole, Appreciative Inquiry, and how everything we’re learning ties into our proposed project ideas. I felt that we were all in a similar mindset, which allowed us to better understand each other’s process of creating an interview guide, and therefore to offer more insightful suggestions and criticisms.
Initially, I thought that the Moodle chat would be an unwieldy tool for this activity, but it turned out to be surprisingly easy to use–perhaps because I’ve become more accustomed to communicating via IM-type chatting, I’m not sure. I think doing this activity face-to-face would be interesting as well, to see what results come about from a different interpersonal dynamic.
As I stated in my interview summary, generating these questions was more difficult than I expected. I was pretty confident in my key evaluation questions, but when it came to actually nailing down the interview questions, it took more time and thought than I would have guessed. And that’s a good thing, in my opinion: it forced me to look at my evaluation proposal from different angles and try to see it from the perspective of the different interviewees–which, in turn, forced me to be a little more creative and innovative in the way I phrased the questions.
Plus, it’s always helpful to hear someone else’s opinions about the progress and direction of your project. It’s easy to develop blinders when your own ideas and opinions are what drive your decisions. In short, it was nice to have a different set of eyes and ears contribute.
The key questions I chose for the second phase of my evaluation project are:
- How well do the different departments across the UIUC campus prepare their respective students for their post-graduate education and/or careers?
- Is there a disparity in the level of educational technologies provided across departments? And if so, how can that be remedied within budgetary constraints?
- Would a campus-wide standard for classroom technology be a practical and effective solution for ensuring students are adequately prepared technologically?
I selected these questions because they represent the core issues surrounding my proposed evaluation. These were the very questions that led to my idea for the evaluation, so it seemed natural to make them the key questions. As such, they will serve to frame other “sub-questions” that will direct the interview subjects toward the answers sought by this evaluation. All the questions are subjective in nature, but I foresee the last question being the most problematic and complicated to answer, because it encompasses so much. My hope is that the answers to all of these questions will at least become more apparent as interviews are conducted and survey responses are evaluated.