Monthly Archives: April 2016

Teaching What to Learn and Learning How to Teach

by Hap Aziz

In his article “The Top 5 Faculty Morale Killers,” published in The Chronicle of Higher Education online (April 25th, 2016), Rob Jenkins discusses several of the ways in which middle managers at academic institutions might influence faculty members’ experiences, for good or ill. Considering full-time faculty (rather than adjuncts), he covers micromanagement, trust issues, hogging the spotlight, the blame game, and blatant careerism; and for the most part, I find myself in agreement with his management observations and commentary. However, one area Jenkins touches on is problematic, and it has often been a subject of (sometimes heated) discussion at many of the institutions I’ve encountered over the past couple of decades. Under the heading of “micromanagement,” Jenkins writes,

“If, as an academic middle manager, you wish to destroy morale in your department, you can start by dictating to your faculty members exactly what to teach, how to teach it, which materials to use, and how to evaluate students.”

In this sentence, Jenkins links four related yet separate points, which he earlier categorized as issues of academic freedom. I don’t believe the concept of academic freedom applies equally as a blanket protection across all four points, particularly as a shield against an administrative requirement to meet a certain standard of professional competency regarding learning outcomes. This discussion has only broadened as faculty and students alike have become more involved with online and technology-mediated learning models, and some of those online learning concerns and considerations may be instructive in this context. Let’s examine Jenkins’ statement point by point.

  • what to teach

When it comes to making decisions regarding the subject matter being taught, there has been little disagreement with the idea that the full-time faculty member is the ultimate decision-making authority, at least within content parameters established largely through professional consensus and agreed upon by academic departments. There are some dissenting viewpoints, often related to more politicized or controversial content, as highlighted in this Huffington Post article. However, there is not enough cause to argue this point with Jenkins, and I see little downside in letting the subject matter expert determine the subject matter being taught (especially in contrast with the opposite approach).

  • which materials to use

As with the point of what to teach, the selection of materials may largely be left to the faculty member. Certain decisions regarding textbook adoption, inclusion of supplementary materials, and the like may be subject to moderation by the appropriate academic department, but even so, the departments themselves include the teaching faculty. The remaining two points are where the conversation becomes contentious.

  • how to teach it

When online courses and programs began to gain traction and popularity as an option for students in the late 1990s and early 2000s, student outcomes lagged comparatively for the online alternatives. Eventually, it became obvious to institutions that basic faculty teaching and technology skills were not enough to replicate the on-ground classroom experience. In the 2004 study, “Online, On-Ground: What’s the Difference,” Ury and Ury found that “the online student mean grade (80%) was significantly lower than the mean grade of the students enrolled in traditional sections of the same course (85%).” Drop-out rates continue to be problematic for online programs due to a number of variables, many of which are differentiators between online and on-ground instruction, as observed by Keith Tyler-Smith in his 2006 Journal of Online Learning and Teaching article, “Early Attrition among First Time eLearners: A Review of Factors that Contribute to Drop-out, Withdrawal and Non-completion Rates of Adult Learners undertaking eLearning Programmes.”

The preponderance of research has demonstrated that building a successful online course is not simply a matter of selecting the appropriate content (or translating and transferring content from an on-ground format to an online one, whatever that might mean). As the pressure for accountability grew (for a number of reasons), so grew the notion that subject matter expertise did not necessarily make faculty well qualified to develop effective online courses. Interestingly, this was by no means a new assessment or understanding. The instructional design community had understood this for quite some time, but without a mechanism for comparative illustration (which online courses eventually provided), faculty design of courses, and of how to teach them, remained standard practice.

It does not necessarily follow that having subject matter expertise means that faculty also have teaching methods expertise. This is true for online courses, certainly, but it is also true for on-ground courses. Institutions serious about service to their learning populations must decide how they will equip their faculty for success, whether that is through ongoing professional development, the provision of support resources such as instructional design staff, or any combination of methods. But that will mean some form of “micromanagement” as institutions get a handle on assessing the performance of their academic programs and measuring the success of their students.

I remember reading an interview with Isaac Asimov in which he talked about his writing. In his life, he authored over 500 books along with countless essays, short stories, and articles. He was asked how he did what he did, and what advice he might give to aspiring authors. With perhaps uncharacteristic humility, Asimov admitted that as much as he wrote, he really had no idea how to explain how to do it. Writing was something he did prolifically, yet that did not qualify him to teach writing to others. Not coincidentally, he also expressed that he would make a poor editor, which brings me to the final point.

  • how to evaluate students

In the past decade, institutions have become quite serious about measuring student success, expending significant resources to determine what affects student engagement, retention, and persistence. The Spellings report (2006) emphasized accountability as one of the four key areas requiring attention in U.S. higher education. There are now, at many institutions, a variety of data-mining tools that allow academic leadership as well as faculty to assess student performance across a wide range of metrics. While a faculty member may be the best person to determine the quality of a student essay based on an articulated mastery of the content area, there are a host of other reporting metrics addressing student performance and success that are not directly related to content mastery. Today’s reality is that student evaluation is most effective as a collaborative activity in which faculty play a key but partial role along with others in the institution.

So, yes, Rob Jenkins has identified several potential morale killers that institutional management might inflict upon teaching faculty. But to no small degree, some of what Jenkins identifies as morale killers I would identify as entrenched attitudes that will lead to pain if they are not willingly let go. Of course I’m not saying that all faculty hold these attitudes, nor am I saying that faculty are unable to teach well or to evaluate student performance effectively. However, these two points are tied to an older way of thinking about the teaching and learning enterprise, in which the faculty member is the sole connection point to the student learning experience. With all the tools and resources available to faculty members in the technology-mediated classroom environment, it’s that older way of thinking that’s the true morale killer.


Filed under accountability, education, education technology, face-to-face instruction, faculty, higher education, instructional design, online education, teaching

The Quality of Learning

by Hap Aziz

I find that in the never-ceasing stream of consciousness that represents my current and evolving thoughts on technology-enabled education, the theme of quality is a constant. In all sectors of the education enterprise, there seems to be a consensus that quality (whatever that might represent) must be an essential component of learning content and experiential process. Even before I thought to quantify the characteristics of quality in education, I had a strong sense that there were indeed characteristics to be measured. But as Hamlet might say, “ay, there’s the rub!” The challenge is in determining what those characteristics are before we can begin to consider how to measure them.

Which brings me to my second Shakespearean reference in as many paragraphs. In Act IV, Scene 1 of his play The Merchant of Venice, Shakespeare wrote the following lines:

“The quality of mercy is not strained;
It droppeth as the gentle rain from heaven
Upon the place beneath. It is twice blest;
It blesseth him that gives and him that takes”

Yes, there are two definitions of the term quality. The first, which I used in the context of learning, is the idea that quality is a measure of how good or bad something is. The second definition, as used in The Merchant of Venice, is that quality is an attribute of something; in this case, Bill is describing an attribute of mercy. Reread the passage above, replacing the one instance of the word “mercy” with the word “learning.” Now consider the line “it blesseth him that gives and him that takes.” It doesn’t take a great shift in mental perspective to think of quality not as a measure of the learning experience, but rather as an intrinsic attribute that blesses teacher and student alike. All we need to do is optimize conditions for this attribute to be revealed.

There are several components to learning that may function as a blessing (or as a curse, if poorly executed); the following are just a few:

  • The facilitation of the relationship between teacher and student
  • The manner in which content is organized and made available
  • The kind of support provided to teacher or student when technical difficulties arise
  • The ability to leverage additional tools that may enhance the learning experience

What level of resources or commitment of effort does it take to optimize these conditions in any particular learning environment? I probably needn’t point out that there has been much relevant research performed. But it is important to remember that we can lose sight of the big picture when we dive into the weeds of data, and that it is always a good idea to revisit key principles on a regular basis. Probably the biggest of the big-picture views is the concept that the entire institution must be aligned from top to bottom and side to side on the core mission of learning. In fact, the institution should commit itself to the ideal of being learning-centered. (While I won’t explore the implications of terminology here, I will point out that there is a significant difference between being learning-centered as opposed to being learner- or student-centered. See the work of Terry O’Banion with the League for Innovation in the Community College.)

Quality as an attribute provides a basis for agreement on a common philosophy regarding the learning experience; “it blesseth him that gives and him that takes.” Once this is understood and adopted as a foundational construct, we may begin to articulate the idea of quality as a measure of the learning experience. This is where we enter the world of metrics and assessments, with the intent to execute an effective feedback and improvement cycle. Fortunately, there are tools that may assist us in this process.

While these tools are extremely valuable on their own, I would never recommend treating their adoption as an excuse to breathe a sigh of relief as though the quality question has been answered. These tools may be integrated, in whole or in part, into the overall governance and strategic planning process that subsequently drives day-to-day decisions regarding how learning activities are conducted. Human intelligence in the learning enterprise is still the prerequisite to data-driven decision making. Or at least it should be.

One of the reasons that it’s difficult to answer the “quality question” is that quality can be categorized in multiple ways, each with multiple considerations. The following diagram depicts a possible model.

[Diagram: Quality of Online (or Technology-mediated) Learning. Copyright (c) 2016 by Hap Aziz.]

The four columns represent the categories in which we might assess quality attributes.

  • Framework – Here we consider the quality of technology infrastructure and support across an institution. How well equipped, for example, is the academic technology group to provide exemplary levels of service to its various end users?
  • Content – The quality of the course design process has a direct impact on the actual materials and media that educators and learners will interact with throughout a particular course. Think of the difference between a well-curated academic journal and a tabloid pseudo-news publication.
  • Experience – When we think of the quality of the faculty and student end-user experience, we need to consider the end-to-end experience as both a service and a product. What will students say after they have taken the course? The answer often comes back to how they felt about what they experienced throughout.
  • Design – Program design quality includes components of the three other quality measures, but it is also an overarching theme that spans an entire program of study rather than individual courses. Individual course quality measures “interact” in the learner’s mind, so a single poor experience might negatively impact the whole program experience.

The horizontal themes are representative of characteristics that are common across all the quality attributes.

  • Ethics involves topics from intellectual property policies and considerations to online harassment and bullying.
  • Resources addresses the way in which institutions provision their online operations, hopefully positioning themselves for success.
  • Constituents is all about audience: who is participating, and what is important to them.
  • Measurement is the ever-present need to understand how well we are executing against our goals at every level of the institution, from leadership to department to individual instructor.
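
To see how the grid generates discussion points, here is a minimal sketch in Python (purely illustrative; the category and theme names come from the diagram above, but the wording of the prompts is my own hypothetical placeholder, not part of the model) that enumerates the sixteen cells as question prompts:

```python
from itertools import product

# The four quality categories (columns) and the four cross-cutting
# themes (rows) from the diagram above.
categories = ["Framework", "Content", "Experience", "Design"]
themes = ["Ethics", "Resources", "Constituents", "Measurement"]

# Each (theme, category) pair is one cell of the grid: a prompt for a
# quality question at that intersection. The prompt wording here is a
# hypothetical placeholder.
for theme, category in product(themes, categories):
    print(f"{theme} x {category}: what {theme.lower()} considerations "
          f"apply to {category.lower()} quality?")
```

Running this produces one prompt per intersection, which is essentially the exercise described next.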

It’s at the intersection of each column and row that we might explore some questions regarding quality, such as what the ethical issues around the use of particular course content might be, or how we might go about measuring the user experience. Some of the questions might point to best practices that could be applied to most institutions under most circumstances, while others might be very specific to individual institutions, programs, or courses. I’ll be facilitating this discussion, in fact, at the Online Learning Consortium Collaborate regional conference in Las Vegas on June 10th this year, and the result should be a list of questions and considerations around those points of intersection in the diagram. I’ll follow up with a subsequent blog entry, so watch this space!


Filed under effective practices, Hap Aziz, higher education, instructional design, learning, online education, Online Learning Consortium, online quality, standards, technology