Earlier this month Phil asked a question on Twitter about the growing usage of (and pushback against) faculty training based on the Quality Matters Course Design Rubric. That question led to a rich discussion – both pro and con – on the usage of the QM rubric in the attempt to improve online teaching in Fall 2020. The QM staff requested that we help with an alternate forum for them to address some of the issues raised online.

This is the third in a special series of podcast episodes on an important topic as we try to migrate from emergency remote teaching to purposely-designed quality online education. Link to Jesse’s blog post on the subject.

  • 15A: Introduction of topic
  • 15B: Interview with Bethany Simunich and Brenda Boyd from Quality Matters
  • 15C: Interviews with Stephanie Moore and Jesse Stommel


Phil: Welcome to COVID Transitions, where we discuss the transition that higher education has gone through and is going through due to the Covid-19 pandemic. I’m Phil Hill, and in this episode, I interview Stephanie Moore and Jesse Stommel to get a deeper discussion on the critical perspective of how Quality Matters and its course design rubric are being used in schools, particularly in the spring and summer.

I’m here with Stephanie Moore, recently of the University of Virginia, but on her way to a new post with the University of New Mexico. A collaborator: I got the chance to co-write an article with Stephanie early on about the Covid transition. So it’s great to meet with you in person. Actually, I think this is the most “live” we’ve ever met, so it’s good to virtually meet, Stephanie.

Stephanie: That’s right. Good to meet you, too.

Phil: So were you surprised to see how much commentary came out, and [00:01:00] what was your impression of it?

Stephanie: Yeah, I have to admit, I was. I mean, in some ways, I guess I should say yes and no. You know, I know how faculty feel about Quality Matters, and it’s really a mixed bag. I think most of the folks who I know, who I would describe as seasoned educators, have a very clear sense of what they like to do in their classroom. They know themselves as educators. They know what they want to do. Those tend to be the folks who are more frustrated with it and feel like it binds them more than it supports them. Whereas there are faculty, especially those who are very new to online, and typically those who are really new instructors, like newly hired teachers, who tend to like Quality Matters more, in part because they feel like it gives them ideas and scaffolding and tools that they’re [00:02:00] just not familiar with yet. So I think you get a mix of reactions that hinges largely on people’s experience and their comfort level with instruction, and especially their comfort level and experience with teaching online as well.

Phil: And as usual, as with your writing, you just packed a lot into that space. I wouldn’t mind unpacking it a little bit. First of all, what is “it”? When faculty are “frustrated with it,” what does that mean in terms of Quality Matters? Is it the rubric versus all of the services of Quality Matters, but then also Quality Matters versus how institutions are applying it? So can you break that down a little bit?

Stephanie: I think the best way to tackle it is to talk about a conversation we had when I was there in the Curry School of Education. When we sat down, and this was pre-Covid [00:03:00], to have a conversation about what quality or effective online learning really means for us, our faculty asked me to lead a half-day workshop on this, where I went through: Well, here’s what the research has to say. And we looked at Quality Matters, OLC, some other examples out there. And I had developed one in-house as well, because a lot of times these are proprietary – you have to pay to use them. And I had been asked in a previous setting to develop something so that we didn’t have to pay for that. And it’s all anchored in the same research-based principles and things like that. And it was interesting watching the faculty react to all of these different pieces. They really liked the in-house model that I had developed, but they felt like that was more of an articulation of a vision of what we really [00:04:00] want online learning to mean, and to be, for the Curry School of Education. And they liked the idea of using that as an anchor for annual evaluations as well; they felt like the way that was articulated was more aspirational and visionary. But when it came down to the nuts and bolts of, ‘OK, so I’ve got to sit down and actually build my course, how do I get that done?’ they liked various aspects of Quality Matters and OLC to scaffold them, to help them get that done, but only insofar as it fit into that vision.

And I think that captures the tension that we see and feel really nicely: a lot of people feel like what Quality Matters or OLC or others help with is the nuts and bolts, or the micro-level details, of how to build a course. But a lot of institutions [00:05:00] end up using the Quality Matters rubric as the definition of excellence, as the vision for online learning, and I think that’s where that tension point comes in. And I think that’s what you’re seeing in the conversations that were happening on Twitter, too: a lot of people were saying, the way in which my institution uses it actually binds my ability to design quality online learning the way that I really want to go about it. And I know Quality Matters doesn’t intend it to be that way. It’s not really designed to be that way. And yet that’s how it’s getting used by administrations. And so in that regard, it’s like any other tool, in that when administration or leadership starts to use a tool in a certain way, associations get made [00:06:00] with that tool.

And I really do feel for Quality Matters, because the rubric itself is nicely anchored in research-based principles. But they struggle with a message that there’s a singular way to design an effective online course. And those of us who have been teaching know that whether it’s online or face-to-face or blended, there’s no singular way. There’s no singular type of course, no particular way to go about this. And I know, based on conversations with them, that that’s not what they intend, and that they really struggle with that message. But that’s the impression that a lot of faculty have: ‘this is a particular type of online course, and maybe Quality Matters is great for that, but I really want to build a very different kind of course.’ [00:07:00]

And frankly, we see the challenge come in when you’re working with adult learners and you want to give your learners a lot more autonomy around how the course gets structured: setting class expectations, choosing what you’re going to cover in class, structuring and sequencing that. So in graduate school in particular, which is where a lot of online learning takes place, faculty express real tension between their educational philosophies or epistemology and what they feel the Quality Matters rubric is suggesting they do instructionally.

Phil: I’m hearing two things, and they’re not contradictory, but I want to make sure I’m hearing both. On one hand, a large amount of the pushback comes from the way administrators use Quality Matters, almost as a QA process and [00:08:00] not holistically. But at the same time, the Quality Matters rubric does lend itself to a particular course design, a set of assumptions. So it does contribute to the ‘design it this way’ type of mentality. Am I hearing that both of those are happening?

Stephanie: I think that nicely summarizes both my own experience and what I felt I was hearing in the conversations on Twitter as well. There was one colleague who posted how Quality Matters tends to suggest a very instructivist approach. And honestly, that’s not a critique we hear just about Quality Matters, but about instructional design broadly – that instructional design tends to privilege the instructor’s decisions about how to structure everything rather [00:09:00] than being a more participatory process. And so it’s not terribly surprising to me to hear that. But I do think that the field of instructional design broadly has for some time been in the process of pivoting away from that instructivist sort of perspective. I think the other challenge here, percolating in all of this, is that there’s a classic issue in innovation, and the diffusion of innovations, that I think is going on here, too. That is, the developers or creators of an innovation invent it one way and envision it being used one way, and then it gets put out into society or whatever context, and it gets used in a very different way from how the designer or developer intended it to be used. And we see that gap between designer or developer [00:10:00] intent and actual implementation all the time.

And I do think that Quality Matters, just like any other innovative or entrepreneurial entity, has to really think about, ‘OK, how much ownership are we going to take over the implementation that’s going on, whether or not that maps to what our vision was, and how that’s affecting the perceptions and the branding of the product that we’ve created’ versus how much they don’t want to get involved in that. So I think they’ve got a very interesting conversation to have internally right now. You know, I’m not sure I have specific suggestions, but I have a few thoughts on how that gets managed. But I do think I would like to see them be reflective [00:11:00] about this feedback that they’re getting and really think about: how can we be different partners? Can we be better partners, not just with the administrations or the institutions, but with the faculty, who are really the ones where the rubber meets the road? And what’s happening at that point of contact is not always a very happy experience.

Phil: So even if it’s not something that, as you said, they intended.

Stephanie: Exactly.

Phil: Then taking a role in hearing where the frustration is, and seeing how they might be able to influence the implementation, as opposed to just looking back and saying, ‘well, that’s not what we said to do.’

Stephanie: Yeah.

Phil: This is taking it a slightly different way. QM is much bigger than just the rubric, but there is a centrality around the course design process, as opposed to getting into course facilitation and teaching.

Stephanie: Right.

Phil: And the very name of Quality Matters [00:12:00] has this implication that quality comes from course design. So when you were at Curry, or just in general, what’s your view of what role QM should play outside of course design? Or what role do you see them playing more in teaching and facilitation?

Stephanie: Boy, that’s a great question, because when we sat down, we were actually cutting the rubric and different things apart, reorganizing and moving pieces around, which was fascinating to watch – how faculty were thinking about it, interpreting these different pieces. And what we ended up with was that the Quality Matters pieces faculty wanted to retain really did end up in the course development phase. There wasn’t [00:13:00] anything that they felt was about the course implementation phase of things. Now, there are some pieces in there that I think actually are, like timeliness of responsiveness on the part of faculty, things like that, which all come at the point of implementation.

I think even there you get into some tricky spaces. If Quality Matters were to decide, you know what, let’s flesh out the implementation phase of a course and provide some guidance around that – having written guidance myself around this, it’s very easy to start to map out guidance that can make every course look like a cookie cutter. And so whether they do that, or just focus on the development piece alone, I really think they need to think about how to communicate a diversity of opportunities, or a diversity [00:14:00] of design ideas, to instructors – to put more of a focus on imagination, or simply a range of different approaches, as opposed to a single notion of ‘a good course.’ Is this a good course? Broadly, those of us who know the research would say there are some broad principles that certainly crop up for an effective online course versus an ineffective course. But those principles don’t drive a particular sort of design, whereas when you start getting into the nuts and bolts of ‘make sure you do this and make sure you do that,’ that’s when you start driving things in a particular direction for courses. I’m not sure I answered your question, Phil.

Phil: It’s a conversation, right? It’s not a Q&A. So that is helpful.

Stephanie: And that all evolves as a result of a very social process [00:15:00] where, you know, innovations aren’t owned by the developer, by the innovator. The people adopting it have a lot to say back, and a lot of input back into that. And so I think once you understand that, it just sort of is how things develop.

And so I think for an entity like QM, the best thing they could do would really be to look at something like that and say, OK, this is how it goes. And rather than trying to fight the social process, what if we actually adopt that as our way of doing things, and we’re collaborative and iterative along with the very people that we’re trying to work with? And none of that is to suggest that QM is not thinking that way. What I heard in response from Brenda and Bethany was very positive, very engaged. And I think that’s a healthy way to go on the dialogue. [00:16:00]

With faculty, we are used to having shared governance. So we’re not simply answering to administration. We want to have a say in how things go, because we feel like it’s part of how universities are structured. Some universities will push on this more than others. But as faculty, we believe in having a shared say in the vision for what it is that we are trying to do, and how we are trying to move that forward. If you’re communicating to that group, to faculty, that you’re not listening – if administrators feel fine about the tool, but faculty feel like you’re not listening – you’re actually missing half of the governance structure of universities. And if you take a defensive posture to [00:17:00] that, you really do risk excluding a very influential voice in the decisions that get made in institutions of higher education.

Phil: Sure. And I would add, just adding in my own view, you’re also likely to trigger even more extreme reactions. So it’s not just missing out; it’s a difficult area.

Stephanie: Yeah, the analogies that I hear a lot of faculty use for QM are not flattering at all. So if they’re already frustrated with the tool itself and the way in which it’s being implemented, and then they voice that, and what they get back in return is defensiveness – a ‘you’re all wrong’ response – it’s simultaneously denying our experiences, which, as you can tell, are very much shared experiences across institutions.

Phil: Yeah, I would say the mischaracterization risk goes both ways. Part of the risk is people [00:18:00] mischaracterizing what QM intentions are, what they provide. But at the same time, there’s a risk of them mischaracterizing the pushback they’re hearing.

Stephanie: I think that’s a good summary.

Phil: But I really appreciate your taking time on this, and hopefully this, in a different modality, will be useful. So I appreciate your help on this.

Stephanie: Thank you. Phil, good to talk.

Next up is my interview with Jesse Stommel.

But Jesse, welcome. And if you could, give the listeners an understanding of where you’re coming from.

Jesse: Great. Great to talk to you. So I’m Jesse Stommel. I have been teaching for a little over 20 years, and my research focuses on higher education pedagogy, and specifically critical digital pedagogy. I am the executive director of Hybrid Pedagogy and an associate director of Digital Pedagogy Lab, and [00:19:00] it’s great to join you. I’m looking forward to this conversation.

Phil: Well, thanks. To jump into it, this is clearly a topic that you’ve been thinking about. In other words, you were not just reacting to a Twitter conversation. As a matter of fact, you’ve written a blog post that I believe was associated with a talk that you were giving. What started the whole conversation was that I was seeing a lot more pushback on the usage of Quality Matters by schools that are trying to help their faculty move into more of a quality online approach. Has this been a topic that you’ve been following for a while, or is this a fairly new interest of yours?

Jesse: Conversations about Quality Matters are something that I’ve been a part of for over a decade, in different ways, at different levels. I’ve been at institutions [00:20:00] that have adopted the Quality Matters rubric. I’ve given presentations where I’ve talked about and analyzed the Quality Matters rubric, and I’ve really worked on this from all different sides. I’ve been an instructional designer. I’ve been an online instructor. I’ve been a face-to-face instructor. I’ve been an administrator. And so I’ve really given a lot of thought to the Quality Matters rubric and how it’s used at institutions. I’ll just be really honest and straightforward and say that I’ve never been a big fan of the Quality Matters rubric. That doesn’t mean that I don’t think there are some really great people working at Quality Matters, working to help faculty move online. And it also doesn’t mean that I don’t think there are some wonderful faculty members and administrators who are using the Quality Matters rubric in effective ways. I’ll tell you that the germ of this conversation more recently for me was watching how institutions were employing and rolling out the Quality Matters rubric in response to the pandemic pivot [00:21:00] to online learning.

And ultimately, my concerns about Quality Matters got a lot greater in this moment, because I just don’t feel like it is a good tool to help brand-new online teachers respond to and engage in emergency remote learning. I feel like the tool – and we can get into this – is far too elaborate. And when I see faculty grappling with this tool in the midst of an emerging crisis, what I see is a lot of faculty feeling like they have no way forward. Like there’s no point of entry for them into this conversation about online learning, and the Quality Matters rubric frustrates that even further.

Phil: And just to provide some context, because I know it’s a complicated subject – on Twitter, part of the discussion was about the rubric, and then there was also the [00:22:00] issue of the broader professional development of Quality Matters – but then we get into how schools are choosing to use it, quasi-independent of Quality Matters. So could you describe the context? Are you talking about the rubric itself, or how schools are applying it, which might not even be what Quality Matters intended? Or is it a combination?

Jesse: I think it’s a combination of both. If I look at the rubric itself as an instrument, we can have a whole conversation about the issues that I take with it. But I think that the bigger problem right now is the way that institutions are using it. And ultimately, the biggest problem is that I see a lot of institutions using this in an obligatory way, where essentially it is not a tool to help people become better online teachers, but a tool for quality assurance – a mechanism that institutions [00:23:00] are using to certify that their online teaching is good. And they’re basically running their faculty through it – faculty who in some cases have never even thought about teaching online. These faculty are being run through what is essentially a bureaucratic juggernaut for them, a process that feels utterly at odds with who they are as teachers and what they feel their work in education is. And so ultimately, the problem is making a rubric like this – a 42- or 43-point, hyper-precise rubric – obligatory, and also not really bringing faculty into a process of conversation around it where they feel like they can consent and be full participants in the work that this rubric inspires.

Phil: And one question I would have, and this gets to [00:24:00] intention versus actual usage: is the rubric, from your perspective, being used as a design guide, or as an opportunity to evaluate course designs after the fact? And specifically, the actual usage today in the transition from emergency remote teaching to online teaching?

Jesse: I’ve seen the rubric over many years, used in all manner of ways. I mean, I’ve seen it used as an after-the-fact evaluation of effective online teaching, as though we can do that neatly and tidily. And I think that’s one of the issues I take with it. And I’ve also seen it used as a starting point, a place to inspire new or existing teachers to think about their online learning in different ways.

I think the problem comes when I see a rubric like this being weaponized [00:25:00] by administrations. And I don’t use that term lightly. I use it very thoughtfully, because students are feeling precarious right now, but teachers and faculty members are also feeling precarious. And when a tool like this is used as a quality assurance mechanism, it ends up feeling like an instrument of an administration looking to control teachers.

That’s at least the way that teachers often feel about it.

Phil: I guess part of what’s interesting to me is that we’re so much talking about the usage by school administrators, particularly at this point in time, even though the issues go beyond this point in time. But does the rubric itself encourage this [00:26:00] usage, or is the overusage, the weaponization, something that’s completely separate from the rubric – it doesn’t need to be this way? How does one lead to the other?

Jesse: That leads me to a question that I’m often discussing with folks: is a tool neutral to its use? And I think absolutely not. I think tools have pedagogies baked into them. Tools teach us how to use them. And so I think about rubrics in general – we could have a whole other conversation about rubrics in general – but if I think about this rubric in particular, this is a lot of what my blog post and my keynote were about: not just targeting Quality Matters, but targeting a way of thinking about education.

And ultimately, I think the issue is that when you take a 42- or 43-point rubric, when someone looks at it, even visually, without actually reading the words on the page, it feels immediately inscrutable. It [00:27:00] feels like the mechanism or the instrument has some sort of inherent wisdom that the person looking at it couldn’t possibly yet grasp. And when they look at it, they feel overwhelmed by it. A lot of people talk about the goal of rubrics being to simplify or to make transparent what is otherwise tacit or unspoken. But I think that this one does something different. I think when you see it, you feel overwhelmed. I was going to say you feel terror. Honestly, I think that there are some faculty who feel terror when they see it, so in using that word I’m not exaggerating. I know faculty who, when they see this, feel terror. They feel: this isn’t something I could possibly grasp or have entrance into. They look at the tool or instrument or mechanism, and they feel: this is something I have no power over; this is something that only has power over me. There is no point of entry for [00:28:00] me into this conversation. It is just a large page filled with text, with no white space on the page for me to fill with my own thinking about what good online learning might be.

I think then what you have is a tool that is over-architected, an instrument that is telegraphing its purpose and its function too overtly.

Phil: Now, I realize after asking you the question that it was a softball question for you, because now you could go into the LMS. But are you aware of their Bridge? They have a Bridge tool as well. One of the things I’m hearing from Quality Matters is that the rubric itself is not intended as the tool for faculty brand new to online teaching, and [00:29:00] ‘here’s a tool that can help you there.’ Have you seen any of the non-rubric tools that they offer?

Jesse: You know, I’ve seen quite a few of their tools. And over the years, I mean, I’ve researched Quality Matters, and I’ve looked carefully at their marketing – at what they say they’re doing and what they’re actually doing. And I’ve approached this from the experience of someone who just – and it’s not just about their website, I’m saying this somewhat metaphorically – someone who just lands upon their website, who lands upon their work, and then, where do you travel from there? Ultimately, the point of entry for most people with Quality Matters is the rubric.

So the idea that there’s something else beyond the rubric – that just isn’t how it works. In practice, people approach Quality Matters through the rubric. That’s their entry point. And so that’s the first page of the book, the first page of the story that Quality Matters is telling. And one of the things that I think happens with the entire Quality [00:30:00] Matters oeuvre of materials – and there’s just a wonderful wealth of stuff that they have; I think that the folks at Quality Matters are incredibly well-intentioned, and they’re doing really good work to think about what online learning is and how it works – but I think what happens is you come at it through the rubric; you come at it from this place of fear, terror, feeling overwhelmed, feeling afraid that you don’t know what you’re doing.

And then there’s a way in which the Quality Matters ecosystem is going to be an answer for you, and they draw you deeper and deeper into this ecosystem. And there’s lots of stuff there, lots of useful, productive stuff. But that isn’t how I see Quality Matters being delivered, if you will, on the ground. Any time I’ve ever been to a workshop employing Quality Matters, the rubric is what is led with. And I think that’s a story that Quality Matters [00:31:00] can rewrite as an organization. But I don’t feel like they’re there yet.

Phil: So what we’re saying is, it’s very much about how things are implemented in reality, in actual workshops on campuses. And to narrow down the conversation to what you said, it’s in particular for teachers or faculty who have not taught online before, in the year 2020. So within this context, one of the questions that comes up is: if this rubric, and this usage of Quality Matters, is not working because it’s overwhelming, for the reasons you’ve laid out, what are better approaches to help these faculty who are new to the modality think through these subjects, including accessibility? How do you ensure that courses are accessible? What is a different approach that is much more realistic [00:32:00] about what these faculty’s needs are?

Jesse: I think, and I’ve said this in lots of different ways, in lots of different forums, but I think we need to start with hard conversations about what the purpose of education is, who our students are, who we are as teachers at our institution, who our faculty are, and what strengths, what wisdom they bring to the work. And we start from a place of figuring out not what are the best practices that are going to work for everyone at every institution. We start by having the hard conversation about what specific approaches we need to employ at our institutions, in our context, even in our disciplines, even in our specific classrooms.

And those conversations get frustrated when we employ approaches that rely on best practices, because really, when I think about best practices, what I want us to talk about is ‘good, sometimes, in certain contexts, with certain students and certain faculty members’ practices. [00:33:00] And we can only get to that place if we start with a conversation. So if I think about a rubric, I think the best place to start with faculty would be having faculty in a discipline, in a department, at an institution, writing their own rubric – having a blank space, an empty page to fill, where they determine what’s important for their students at their institution.

Yeah, thank you so much.

Phil: Great. Well, listen, I appreciate your time on this, and certainly we’re going to refer to your blog post and keynote address for more rich description of some of these subjects.

But I appreciate your time on this. Thank you.

This is the second in a special series of podcast episodes on an important topic as we try to migrate from emergency remote teaching to purposely-designed quality online education.

Phil: Welcome to a special episode of COVID Transitions. I’m Phil Hill, and recently we had an interesting Twitter conversation – or as interesting as you can get on Twitter – that involved Quality Matters and the usage of the rubric, and how schools are using it to try to transition from emergency remote learning to online education, to get back to things that we know how to do in online education and improve quality. It was a fascinating conversation, but it was on Twitter, which is very limited. We have Brenda and Bethany from Quality Matters, and they’ve agreed to join us so we can have more of an extended conversation and get to these important topics, giving people more of a chance to discuss things in depth. So I’m with Brenda Boyd, senior academic director of program services at Quality Matters, and Bethany Simunich, director of research and innovation [00:01:00] at Quality Matters. It’s great to have you both here. And Brenda, welcome to the show.

Brenda: Thanks, Phil. It’s great to be here. We’re glad that we have this opportunity to have this conversation, and to talk a little bit about some of the things that we saw, some of the arguments that are being made, and how we were thinking, well, some of those things are misconceptions. So we’d really appreciate the opportunity to clear some of those things up today.

Phil: Great. And also, Bethany, welcome, and glad to have you here as well. As you and I were discussing, these are our first external interviews as part of this podcast. Welcome.

Bethany: Yes. Yes. Thank you so much for having us. And as I mentioned to you before we began, it’s very hard to have good conversations on Twitter. So I appreciate you giving us this opportunity to hopefully start a better conversation and to really bring some new ideas into Quality Matters and the discussion surrounding it.

Phil: And [00:02:00] this will be interesting – we’re going multimodal, switching from Twitter to a podcast, then having a blog post attached to it. As I said, a lot of the trigger for this conversation came from Twitter. Basically, I was asking the question, ‘I’m starting to see a lot of pushback and commentary on the usage of Quality Matters in terms of schools trying to do quality assurance or manage the transition from emergency remote teaching to online teaching.’ And we got a plethora of responses back, both positive and negative. But it raised some subjects that we wanted to get deeper into. Were you surprised by the general feedback of the discussion on the rubric and course redesign process? There seemed to be a lot of emotion involved.

Bethany: Well, I think emotions are high right now in general, but my general reaction and initial feedback were actually very similar. On one hand, a little bit of disappointment at the misconceptions that are presented as facts, but that was tempered with some excitement and even a little bit of hope for renewed conversations about improving how QM helps faculty meet their goals. So, I think that in some sense, this conversation about quality in online learning – so not even specific to QM, but quality in online learning – is coming at a time when you have an entirely new group of faculty moving to the online environment. So, we know that we had the remote shift in the pandemic, and a lot of faculty were spending time this summer trying to improve the quality of what they did in the spring and to build on those successes, because I think a lot of us anticipate that we’re going to be shifting back in the fall. You’ve already commented on that. So, I’m sure they already plan to be teaching online. So, if this was your first foray into online teaching and learning, that was a big [00:04:00] gap for you to fill, right?

So, there was a lot of time spent in spring and summer on faculty development. And with that came this conversation about QM and quality, because a lot of institutions use QM. I think that QM, though, is way too often distorted and just seen as the rubric. And in truth, Phil, that really becomes a straw man fallacy. It does surprise me that academics – because we are trained to inquire, and to analyze and critique something only after we understand it – would criticize something that obviously certain people have only a surface-level understanding of, right? So I’d like to actually use that Twitter thread, though, as a way to open up broader conversations with the community about quality in online learning, because the rubric is just one of the tools that QM has. It’s one of the core tools in helping institutions set up an integrated and sustainable quality [00:05:00] assurance process. And that’s the point of QM: to help institutions continually improve their online and digital education and their student-focused learning goals. Brenda, what was your feedback?

Brenda: So, I would have to say that I was also surprised about the pushback on QM in the Twitter threads that I saw, and I was also kind of heartbroken. I’ve got to be honest, because Quality Matters is not a behemoth EdTech company. We have thirty-nine people who work for our little nonprofit and are very passionate about what we do and very passionate about improving online learning. And so some of the things that were said were really, I would even say, hurtful, because we’re all on the same side. We all want to help students, and [00:06:00] we are at a point in time where a lot of people who never thought they would be teaching online have been thrust into that. And institutions are challenged to support faculty in doing that. I think what a lot of people may not understand is that QM is the connector. The rubric isn’t developed by Quality Matters staff; it’s developed by the community. So, we have rubric committees that are composed of faculty who have experience with online teaching. We get input from our community. We analyze the data from the course reviews. It was pretty interesting, because QM is faculty driven. Faculty are the peers doing the reviews. QM staff do not do course reviews. So, there were some real interesting assumptions that were made. We call them peer reviewers [00:07:00] because they are faculty peers who are there to help their colleagues improve through the continuous improvement process that is course reviews.

Phil: But I did see quite a few comments that acknowledged, or pointed out, the role of the college or university. And a lot of what I saw was how much it gets used as a QA tool or a cudgel in the process. Let me take a step back just a little bit. Which parts of the conversation, in your mind, do you see as, ‘oh, that’s a legitimate subject that we need to deal with,’ versus mischaracterizations? You’ve already mentioned one of the main mischaracterizations: equating QM with only the rubric.

Bethany: I think there were several legitimate points being made. One of them is how QM interfaces with an institutional implementation of quality assurance around online learning. So I think that that’s a legitimate [00:08:00] conversation – not from the standpoint of, ‘oh, that’s all the institution’s purview, and what happens there is just whatever the institution does, and QM is separate from that,’ but in the sense of: how can we have better conversations about how to involve the right people at the right time, and get the right individuals at the table for things like quality assurance implementations? How can we have better conversations and use tools better so that this is not always a top-down initiative? And so that quality is continuously focused on student success – as QM positions it – and not weaponized against faculty? So I think that it really brought to light that there are ways and needs for having these better conversations with those that do the implementation of this work. All too often faculty are left out of that conversation. Instructional designers may be left out of that conversation. [00:09:00] I think this is a legitimate conversation to have.

The other thing that I thought was a legitimate point, and that I’m actually really excited to have some conversations about: you had mentioned that you have an upcoming podcast with Steph Moore, and we plan to talk with her as well. She brought up some good points, as did some other faculty, about the flexibility surrounding the rubric. Does it always work for every design approach? I think that’s a legitimate and good conversation to have. And that also dovetailed with some of the other misconceptions that came out. In reading that thread, it became apparent to me that a lot of faculty are unaware that there are lots of different types of reviews that you can do, including MyCR (My Custom Reviews), so that you can customize the rubric to meet your own faculty and institutional needs. So I kept trying to ask the question: well, what’s the goal? You know, for faculty that maybe feel ‘my course [00:10:00] doesn’t work with QM,’ or who are really unfamiliar with the ways that QM can support the work that they do – what’s the goal? Is the goal continuous quality improvement within that course, or for that faculty member and their students? Is it a larger institutional goal that’s tied perhaps to accreditation?

Phil: Sure. Do you mind if I just jump into a very specific point? Because you raised that about the flexibility, and I saw this in several comments about the 42 elements. Do you have to go through every one of them? You mentioned flexibility of usage. Should people treat these 42 elements as something you have to go through one by one and use in the same way, or what’s the right way to think of that?

Brenda: Well, I think that, you know, the QM Higher Education Rubric does have 42 specific review standards. They are organized into eight general standard areas. Those general standard areas include things that no one would disagree with, like providing [00:11:00] support to students, or telling students where to go and what to do first, and introductions and overviews. To your question of where they begin, or how much flexibility there is: it goes back to what Bethany was saying about the goals – what are they trying to do? Are you trying to get all of your courses certified, or are you getting your program certified by QM? What is it that you’re trying to actually accomplish? If your goal is just to improve the quality of your courses and maybe you’re not ready to have everything certified by QM, maybe there’s a subset of standards or selected standards that you want to use. So, we have tools that enable the modification of the rubric. One is called My Custom Reviews, and it enables an institution to take the rubric, copy it into this tool, and then take out the standards that they don’t want to address. They also have the ability to add their own things into the rubric. [00:12:00] The rubric is designed to be interrelated and holistic. So, there are standards that refer to other standards, and the list of specific review standards on the website is not the entirety of the rubric. That just lists the specific review standards, which are intentionally succinct. There is a whole set of annotations behind them that members have the ability to use and integrate into their professional development, to use as examples and best practice, et cetera.

Phil: If you don’t mind me jumping in – to me, the theme that I saw was how it’s being used here in 2020. There was a common theme about schools using it as a QA device, saying, ‘OK, we’re going online, everybody’s got to meet these standards,’ or ‘here’s how we’re defining quality.’ The usage was as the QA method to enforce [00:13:00] a transition from emergency remote teaching without standards to quality online learning. Are you seeing that as a common theme? And what is your commentary about whether that’s an appropriate usage?

Brenda: We’re not seeing institutions coming to us saying we want to certify this whole program before fall begins. Some institutions may be saying, ‘these are the quality assurance metrics; this is what we’re moving toward.’ But we haven’t seen a huge influx of new course reviews. And honestly, we would not recommend moving from emergency remote instruction to the full-blown rubric, which is why we developed the Bridge to Quality, which is the online course design guide. It’s on our website. It’s free. It’s open to everybody. And we’ve been working this summer – Bethany spearheaded the Emergency Remote Instruction Checklist to help faculty transition through the pivot, and then our next step was how do we help them move – exactly your question – from [00:14:00] this emergency remote instruction environment toward the quality standards? We knew that there were going to need to be steps in between, that it’s unrealistic to take the course design rubric – which is designed to review courses that have been taught a couple of times and have had the opportunity to work the kinks out – and dump 42 standards into your lap and expect them to be magically met at this point in time. It’s just not realistic. And we have seen a huge influx of professional development over this summer to help faculty move toward online. Even in our own Designing Your Online Course workshop, we don’t address all 42 standards; we look at our Essential Standards. We look at that backward design approach, looking at the alignment of the learning objectives: are your assessments measuring your objectives? Because [00:15:00] there are a lot of faculty out there who may not have thought about these things before. And in the end, this moment gives them that opportunity to focus on what are really the outcomes for my learners and how am I enabling them to get there.

Bethany: Yeah, and just to piggyback on that for a second: regardless of the tools and the processes used, I don’t think there’s a one-size-fits-all approach here when we’re in this very difficult situation with the spring pivot and potentially a fall pivot. So I think most colleges and universities are facing a very big task of trying to do remote learning at a minimum level of quality, because we know at the very least that, for our students, we have to do a little better job than we did in the spring if we end up having to pivot in the fall term as well. And there are lots of tools to help get there. But the rubric is a tool for evaluation, [00:16:00] and that’s not where most remote courses are. The big difference between remote courses and online courses is that online courses are purposefully designed for the online learning environment, whereas remote courses are focused on recreating the face-to-face experience at a distance, and those are two very different goals. So that’s also why I kept asking the question about what is your institutional goal? What is your faculty goal? So when the emergency remote pivot happened, that’s when we created, as Brenda just mentioned, the Emergency Remote Instruction Checklist.

And that was designed to highlight what faculty pivoting to remote should concentrate on first, and really how to connect with your students – how they could improve the remote learning experience as the semester continued and as they had a little bit more time. But as Brenda mentioned, we’re a small staff. We worked nights and weekends to create that as a public, free resource. And we knew then that there would be a gap between [00:17:00] that and where faculty and institutions wanted to be for the fall. And again, that’s why we created the Design Guide. And again, that’s a free public resource. But we also created that understanding that the rubric is not a design guide; it’s an evaluation tool. That harkens back to the Twitter thread as well. There’s a big misperception that the rubric is a design guide rather than an evaluation tool. And I think that hits at the core of why this came out, and at your question of whether institutions should be using a rubric to get to where they want in terms of their remote course quality.

Phil: But you mentioned that one size doesn’t fit all, and a lot of it gets to teachers who have never gotten into this before. And it raises the question: what is the sweet spot of what QM is designed for in terms of faculty experience?

Bethany: Yeah, that’s a good question. And I’m going to actually [00:18:00] let faculty speak for themselves on this one, because, as the research person at QM, I do know the data on the faculty that interact with us and their experience in our professional development. We regularly pull this data to see how faculty are responding and how they’re interacting. So, for some of our most popular workshops – those on teaching online, designing your online course, our flagship workshop on applying the rubric – faculty have a high degree of satisfaction regardless of their experience level. For the design workshop, for example, those faculty that are newer to online design and teaching have a satisfaction rate of 95%. For those that have eight or more years of experience, so those that I would consider very experienced online instructors, their satisfaction is 93%. That’s one way to really say that the QM community is a home for everybody, because those more experienced faculty [00:19:00] also bring that knowledge to bear on peer reviews – for example, if they’re serving as a team chair, and in the mentoring that happens between themselves and the other peer reviewers. But there are also opportunities for them to share their additional expertise within workshops, and to help QM expand our own thinking with their innovative design and teaching approaches, alongside those faculty that are newer to online teaching. And as we just talked about, right now we have a whole new section of faculty that have really never come to the table to talk about online teaching or had a chance to really practice that at their institution. It’s a much larger community than it ever was. And I think it’s really ready for even more of the benefits and the experience of those that have been doing this for years.

Brenda: I would agree. We see in our professional development faculty with no experience and faculty with 20 years’ experience. I just saw a tweet yesterday by a faculty member who was like, ‘I’ve been [00:20:00] teaching online for 20 years, and I took this professional development workshop, and I feel like I’m really going to help my students this fall.’

So, I think that there’s a wide range, and we have entry points for everyone. Our intention really is to have an open and collegial dialogue. Let’s be honest: faculty members don’t necessarily get any professional development on how to teach online unless their institution provides a forum, or they went through an educational technology or instructional design program, or they were lucky enough to have a teaching assistantship or a graduate assistantship that gave them some teaching and pedagogical training in their graduate programs. And I agree that there are a lot of different approaches to doing so. And we want to be clear that this is not about cookie-cutter courses, because while there are 42 standards, there [00:21:00] are many ways to meet them.

We’re technology agnostic. So whatever tools you’re using, we do look to see: does this help support what you’re trying to do? But it’s not about evaluation of a specific set of tools either. So I think that there’s lots of room in the Quality Matters universe for everybody to come in and take what you need, take what helps you. But we wouldn’t recommend that you move from emergency remote instruction to certification in the next semester. There is a learning curve; we all know that. This is sort of jumping around.

Phil: But to go back to the beginning, one of the first things you guys talked about was too much focus on the rubric, as if QM is all about the rubric. And the rubric by itself – correct me if I’m wrong – is very centered on course design or [00:22:00] evaluation of course design. A lot of the discussion on Twitter was saying, ‘well, that misses the whole element of actual teaching, what happens in the classroom.’ And part of the pushback is that associating the name ‘quality’ too much with just course design misses a whole crucial element: the actual teaching. So, I guess my question is, what is QM’s role outside of course design, in the actual course teaching and facilitation, and the live aspect? Do you guys have a specific role there?

Brenda: Well, you know, Phil, we’re moving in that direction in terms of helping faculty with teaching online. So, we have a teaching online certificate and we also have a teaching online workshop that’s just two weeks long.

But the teaching online certificate gets at the things that are important to be [00:23:00] successful in online teaching. QM won’t be reviewing faculty teaching, and QM is not meant to be a silver bullet. Even if you have a QM-certified course, yes, absolutely, how it’s taught has a tremendous impact on the quality of the learner experience. And so we have developed some of these things like the teaching online certificate. In those workshops we barely touch on the rubric. But we talk about gauging your own technology skills: are you ready to teach online? So, we get into some of the teacher readiness portion. We get into some of the orientation, connecting learning theories to your teaching strategies, and evaluating your institutional policies so that you can determine what those policies mean for online learners, how to enforce them, and who you need to call [00:24:00] if you need to. Thinking about the pedagogy of the course from the teaching aspect: how are you going to take that course design and put it into action?

Bethany: Phil, your question also speaks to the fact that – and I’m saying this as a former face-to-face instructor who then moved online – when I was teaching face-to-face, design and teaching were pretty fully meshed together. I didn’t do a lot of proactive design; I really was uninformed about instructional design and design approaches for online teaching. I was disadvantaged in that way. So I was teaching face-to-face, and I moved online … and online, you can’t really design on the fly unless you’re a very experienced online instructor and you have a high degree of skill in that type of design [00:25:00] approach. But when I first was moving online, I realized way too late that there was so much that I didn’t know, and I didn’t even know what I didn’t know, right? And the first thing that I had to tackle was really how to design the course, because I quickly realized it wasn’t just about my pedagogical approach. When you’re designing a course purposefully for the online environment, you also have to think about organizing it in an LMS. I had to think about Web design and user experience design and content strategy. I had to think about organizing my course in a logical way that allows students to move through it. I had to make sure they had access to technical support.

And those are things that happen in the design phase and need to be in that course before it starts running, and that’s separate from teaching, right? So QM and the rubric – yes, the design rubric is focused on design, but of course, teaching is the other part of that. And as Brenda mentioned, design is only one part of [00:26:00] the overall good online learning experience that we want our students to have. I think that’s something else that I took away from that Twitter thread: that there may be faculty that are still unsure about all the things that really need to go into a good learning experience for our students. It’s not just a well-designed course, purposefully designed for the online environment. It’s not just a prepared faculty member who is ready to be an effective online teacher. It’s also supporting students and student readiness. It’s also having that institutional infrastructure and support. QM calls that the Quality Pie, and there are certain things that we really help institutions do within that pie. But there’s also a lot that’s institutional purview. As Brenda mentioned, we are technology agnostic. So there are lots of things that have an impact on a faculty member’s design and teaching – like the LMS, like other institutional policies – that are [00:27:00] separate from what Quality Matters helps with. But we’re moving more and more into helping faculty become better online instructors. As Brenda mentioned, we have professional development around those areas.

Brenda: You know, in the face-to-face classroom, we all know as students where we’re supposed to sit, where the teacher is going to be, that class starts at this time and ends at this time. We don’t have that framework online; it isn’t already there. We have to build the walls and put the seats in and develop the structures. And a lot of what General Standard One is all about is orienting students to that kind of situation. But you can’t do that until you have built the classroom. You can’t build the orientation until you have that done. So, the Bridge is really there to support faculty, walking through a phased approach with design steps that they can take [00:28:00].

Phil: To go back even further – and I appreciate the full descriptions – you mentioned at the very beginning that part of your surprise at the conversation, or at least dismay, was because you’re a nonprofit, not a large for-profit EdTech company. But to help people understand, how does QM get funded? What is your monetization from an organizational perspective?

Brenda: So, we’re a completely bootstrapped organization; we are funded through memberships and fee-for-service. Professional development fees, course review fees, and membership fees are what fund QM. We don’t have any venture capitalists underwriting us. We don’t have any grant funders.

Bethany: No foundations, no state funders, no federal funders. That was – I’ll be honest, Phil – a tough [00:29:00] tweet to read, because when you are 39 people at a nonprofit, funded by the community that you serve, and the perception from people who obviously haven’t taken a moment to find out what QM is and who QM is, is that it’s a big EdTech company … it’s a disservice, frankly, to the very dedicated and passionate staff that we have at Quality Matters. We are all dedicated to student success and working on behalf of our members and the community, and we have been working as tirelessly as everybody in higher education and K-12 has since the spring semester. We’re a mission-driven organization, and we are focused on supporting student success in digital learning environments.

Phil: One quick follow-up on this question. You mentioned not everything’s on the website, but where is the line for what can be done for free [00:30:00] with QM materials – like seeing the course evaluation rubric, seeing the full rubric? Where’s the free/paid divide for a school?

Brenda: If you really want to do QM, we would love to welcome you as a member. Membership gives you access to the full annotated rubric. You get access to our self-review tool, which is the online rubric that faculty can use to self-evaluate their own courses. You get access to our course review management system to do internal reviewing. And then at some membership levels, you can manage your own course reviews with appropriate professional development. So, we want the rubric to be used in the spirit in which it’s intended. We teach how to use the standards, how they’re applied, and how to write helpful recommendations to colleagues through [00:31:00] our professional development. As members, you get access to these things, plus member rates on our professional development, conferences, etc.

The standards are on our website to give you an idea of what we do. The intention is not for people to take the one-line specific review standards and go use them on their own, because they can be misinterpreted without understanding the annotations behind them. There are a lot of free things that we offer, including, as we’ve already talked about, the Emergency Remote Instruction Checklist and the Bridge to Quality Course Design Guide, and we have a free Research Library as well as an Accessibility and Usability Resource Site that’s open to everyone. You can just register and hop in there. The Accessibility and Usability Resource Site is [00:32:00] moderated by accessibility experts from our community. They are sharing their expertise freely, with information about how to make your courses more accessible, from Universal Design for Learning to how to make a Word document accessible. You can go in and ask a question, and they will come in and answer it. We want to lift all boats. We want to help everyone.

And so we do things such as the Bridge. It refers to specific review standards, but it doesn’t get into all of the material behind the annotations. We are working on a version for members where we are looking at the annotations and how we can help turn those into more of a design guide, because we know instructional designers are already using the standards to guide course development. Those can also help instructional designers when they’re having conversations – it’s not [00:33:00] just them saying this; there’s an organization with a standard set that was developed by faculty, and that kind of helps back up their urgings to do the things in the rubric that are the standards for quality online courses. Does that help?

Phil: Yes, it absolutely does. And I appreciate that full description.

Bethany: I just wanted to add one thing there. So, yes, we do have memberships, like most other nonprofits. But I also wanted to add that a lot of the QM community creates additional things that are free to the public and to members, right? Brenda mentioned a lot of the things that we’ve put out freely for the public, the ERIC, the Bridge Guide. We’ve done about two free public webinars per month since 2020 started. But also, within the QM community – just as one example of how QM is helping institutions leverage what they’re doing and do things at a lower cost – for [00:34:00] our largest state system, QM Ohio, they created their own course review bartering system so that their membership can conduct certified reviews, if they want to, at no cost. So there are lots of options within the community. We have a low-cost train-the-trainer model so that if an institution wants to deliver certain QM workshops, they can train their own facilitator to do so and then deliver them for free or low cost, depending on what system they’re in.

Phil: Well, thank you. Yes, that is very helpful. And I’m glad you brought up the community resources as well. Well, listen, I’ve really appreciated your time today. This is obviously a longer episode than we normally do, but I think this is an important topic at an important time. Bethany and Brenda, I really appreciate your time and your full answers on these subjects that came up, and I certainly wish you the best. Thank you.

Brenda: Thank you.

Bethany: Thank you so much.


This is the first in a special series of podcast episodes on an important topic as we try to migrate from emergency remote teaching to purposely-designed quality online education.

  • 15A: Introduction of topic
  • 15B: Interview with Bethany Simunich and Brenda Boyd from Quality Matters
  • 15C: Interviews with Stephanie Moore and Jesse Stommel


Phil: Welcome to COVID Transitions, where we discuss the transition that higher education has gone through and is going through due to the Covid-19 pandemic. I’m Phil Hill, and I’m here with Jeanette Wiseman and Kevin Kelly. Earlier this month, we had an interesting situation where I put out what I thought was an innocuous tweet asking why I’m starting to see more pushback on Quality Matters and its usage during professional development this summer. I’m not arguing for or against it, but did something happen that explains why this is becoming more discussed out in the open?

For those who don’t know, Quality Matters is a non-profit organization that provides a rubric of course design standards and creates a replicable peer review process, the goals being: training and empowering faculty to evaluate courses against these standards; providing guidance for improving the quality of courses; and certifying the quality of online and blended [00:01:00] college courses across institutions. And boy, it seemed like the Twitter conversation tapped a vein. We got all kinds of conversations going back and forth, a lot of it quite emotional, where you get the sense that there was really pent-up feeling behind this issue – that this is a topic that’s really hitting people right now, and they’re starting to let it out. Twitter is not the best medium to explore topics in more depth, and we agreed with Quality Matters’ request to provide a different forum.

Hence the special podcast series. In this first episode, Jeanette, Kevin and I introduce the topic. In the second episode, I interviewed Bethany Simunich and Brenda Boyd from Quality Matters to hear their perspective directly and in depth. And in the third episode, I interviewed Stephanie Moore from the University of New Mexico and Jesse Stommel from Hybrid Pedagogy as they provide a critical perspective, albeit with constructive criticism and suggestions. [00:02:00]

But before we do that: the first thing that struck me was how surprised I was by all of the responses we got. The general sense I got on why this became a big topic is how Quality Matters is getting implemented, particularly now, as a method for administrators and schools to try to get control over online education, or to help them migrate from what they see as emergency remote teaching to true online education. So it’s becoming the tool to say we have to get all faculty doing quality online education. And the way it gets applied is a huge portion of why there’s a lot of frustration and emotion out there.

But to get started, did this discussion surprise both of you guys? And, you know, why do you think there was such a strong online discussion on this topic? [00:03:00]

Kevin: I think there are a couple of things at play here. Right. So you pointed out that this is in response to Covid; campuses are implementing processes at higher rates of speed. So you get this combination of forces, right? It’s like the wedge in Southern California, Newport Beach, where two different vectors of waves form this massive wave. That’s really fun to ride, but really scary. And people can get hurt. You have the use of a rubric, which is, in and of itself, not the challenge. And when you say innocuous tweet, I don’t know if those exist anymore.

Phil: I got accused of that, by the way.

Kevin: Yeah, but I think, you know, when Stephanie Moore brought up this being used as a cudgel to force us all toward regression to the mean, she’s pointing out that, hey, some people have already been working on their online courses and may not need to go through the same process. Other faculty members are being asked to do something very quickly, so it can get the [00:04:00] feeling of being like a Play-Dough factory: everything is going to look the same, maybe a different color, but it’s going to be the same shape and dimension. But, you know, I’ll stop there for a second. There’s so much to say about the difference between using a course review process to improve the experience for learners, and implementing a course review process very quickly to standardize or try to guarantee a sense of quality to, you know, assuage the fears of students and parents.

Phil: But thoughts and emotions have been building on this topic over the past few months. It’s becoming apparent to me as well that people have wanted to have this discussion.

Jeanette: I wonder if a lot of this is really based on just overall frustration, to some extent. I think that there are likely a lot of people that are trying their best to get these courses up and running. I think the people that are experienced [00:05:00] with running online courses and doing instructional design and pedagogy online, and who are comfortable with that, find these rubrics confining, something that doesn’t allow them to really teach the way they want to.

And I’m wondering if that’s where I’m seeing a lot of the pushback is those people that are just like, ‘hey, I know what I’m doing. Please don’t make me do this because you’re requiring it.’

Kevin: Well, I’d be interested to explore how many of the tweets are by part-time lecturers or faculty who could really use some guidance, full-time tenure-track faculty who aren’t used to having their teaching in any form evaluated by a peer, or instructional designers who are used to helping faculty through these challenges. Because I think those are different audiences. Again, there’s a sense of privilege that tenure-track faculty members may enjoy: ‘Hey, my course is my course, don’t tell me how to teach it.’

And when we talk about, hey, online learners are still [00:06:00] succeeding at lower rates. We just saw all the student surveys telling us how much students weren’t engaged. And the CHLOE survey of chief online officers said the same thing, that students didn’t feel like they had any form of engagement. It was a very flat experience. So there are ways to use rubrics to make faculty aware of the most common challenges. I know at Peralta we created the equity rubric as a way to make faculty aware of the biases, assumptions and institutional barriers that affect learner motivation and achievement, according to the research. And then we created online training modules so faculty could learn about the challenges, analyze what they look like in a real course, and build their own activities. And so when you see some of these tweets referring to the professional development, the opportunities for conversation, again, I don’t think people are questioning the use of the tools so much as how they’re being used.

Phil: The people that I know, [00:07:00] and you know, you have a bias toward who you really pay attention to or give credence to when you know who they are, but so much of the pushback against Quality Matters, where it seemed to tap a vein, if you will, came from people who were experienced online teachers or instructional designers, people who have already been pushing for more faculty to increase their knowledge of how to use the online modality to improve teaching and learning. And you got the sense it was more like: we know that we need to improve things. And I’m not just speaking for my own personal case, but I’m frustrated that the way Quality Matters is getting implemented is forcing us down a path that I already know is dangerous. And so it’s the over-application. I saw a lot of very thoughtful responses where people were answering not just for themselves, but for what they felt [00:08:00] is needed in the market, if you will.

But at the same time, you make an excellent point. A lot of this is how it gets implemented. As somebody said, it’s unfortunate that it’s called Quality Matters, because it implies course quality comes from the usage of this rubric. So if you want quality online education, as in moving from emergency remote to online education, here’s how you do it. And so you take that sort of mentality, and then you use it as a cudgel to make everybody fall in line. So a lot of the pushback was argued not on the concept – some was on the concept of the work – but far more was on how it gets applied at schools, perhaps overzealously, and how a rubric should be viewed. Is it a minimum set of standards, a guide to make sure you think of certain aspects, or is it the way to get quality into a course?

Kevin: Well, I’d see [00:09:00] it as a scaffolding device to help online instructors who might be newer to the process begin to improve the course experience based on what the research shows. And so those professional development opportunities that a lot of tweets described, both in response to you and to Deb Adair, are one place for those conversations to take place. But peer review processes are another, where you can have much more in-depth conversations, because you’re walking through your course and talking about why you’re doing certain things, and you have a chance to hear from a veteran online instructor how they do things. And so it’s more along the lines of what Jesse Stommel was saying, and others, maybe Peter DeCourcy, who were talking about conversations being part of the process instead of just ‘running people through the grist mill.’

Jeanette: I mean, to that point, Kevin, how often do you think that scaffolding is happening? And isn’t that what people are pushing back against?

Kevin: It’s probably the case that because you have to scale up your trainings [00:10:00] over the spring and the summer, you’re going to have less time for conversations, because you’re just putting as many people through some sort of preparation as possible. We saw, and I think it was in the CHLOE survey, that people are averaging somewhere between 20 and 40 hours of formal prep for the fall, which is more than a lot of instructors ever prep, period. They don’t usually get pedagogical training in this way.

Phil: But I do want to separate these, because I think we can explore both. There’s a question not just of how it could be used, but of how it is being used. And that’s where I saw a lot of the argument saying, no, it’s not being used as scaffolding, it’s being used as the be-all and end-all. That’s a problem. How it can be used, or how it should be used, is a separate question.

And to be fair, Deb Adair jumped into the discussion, and she mentioned some [00:11:00] things she had said. ‘It’s not an endpoint. It’s a beginning. It’s always been about being better than good enough.’ That’s trying to say, let’s raise the floor for all of it. But that doesn’t mean that talented instructors and instructional designers can’t go further. So it’s about ensuring the basic design is in place and helps all students to be successful. That was one of the things she was arguing in the initial feedback.

However, I think there’s an organizational issue that they need to be careful about as well, because the initial reaction that was happening both privately and within the discussion was, ‘oh, that’s just a bunch of grumbling faculty, that’s the five percent, and they’re just grumbling about what they want. They don’t want any standardization, whatever students get.’ So there was sort of this organizational pushback that they need to be careful about, because most of the comments I saw were not [00:12:00] of that vein of ‘I only care about myself.’ I think it was more people – and you mentioned Jesse Stommel – who have really thought about what this means more broadly for education and what’s best for students. So I guess I’m cautioning, or hoping, that they don’t interpret this as a bunch of rabble-rousers, as opposed to a legitimate discussion that needs to be had and can lead to improvement.

Kevin: Well, I liked how some people pointed out – it might have been Kelvin Bentley, I’m not sure – that we need to also be looking at facilitation. It’s something I brought up in that three-part series about online course design rubrics on e-Literate: we use these rubrics for the course design process, but very few of the seven major rubrics out there look at the facilitation process. And that’s actually just as important once you get the students into the online classroom. How are you engaging them and making sure that you’re assessing their learning in authentic ways?

Phil: Rob Gibson had jumped in, and I [00:13:00] believe he’s led a lot of Quality Matters training, so he’s seen how it can be applied based on his direct experience. And he had a lot of good points about its potential, that it really can improve teaching and learning. He brought up an issue about accessibility: here’s a great tool to really push people to deal with accessibility comprehensively throughout a course. And if you just throw out the baby with the bathwater, where else are you going to get some of this advice to ensure that people aren’t ignoring these issues? Kevin, you mentioned the Peralta rubric. Same issue there. If you throw out the baby with the bathwater, do you have another tool that does a better job of making sure that people think of the equity issues involved in education? So I thought that Rob had some very good points about how it can be used, and also almost a caution: if you throw it out, then how are you going to deal with some of these subjects?

Kevin: Right. And [00:14:00] I think it boils down to this: when groups create these instruments and processes, they’re done with the right intentions. And so people have to be careful in how they’re applying them, even in cases of emergency. I think the conversation is a very good one. And it’s interesting that it took things boiling to a head during a time of crisis for it to emerge into public discussion.

Phil: So with that in mind, that’s part of the reason for this series: it’s a valuable conversation, and you and others have made an excellent point that Twitter is not the best place to have it. So that’s what we’d like to do. We’re doing podcast interviews to allow thoughtful leaders from the different sides to debate some of these issues in more depth on an important topic and get it out of the Twitter discussion. I’m looking [00:15:00] forward to hearing from them. And we will definitely also want to discuss what they’re saying, but also the general subject of rubrics, not just Quality Matters, and what their role can be.

Thanks for prepping the field, if you will, Kevin and Jeanette.