Earlier this month Phil asked a question on Twitter about the growing usage of (and pushback against) faculty training based on the Quality Matters Course Design Rubric. That question led to a rich discussion – both pro and con – on the usage of the QM rubric in the attempt to improve online teaching in Fall 2020. The QM staff requested that we help with an alternate forum for them to address some of the issues raised online.
This is the second in a special series of podcast episodes on an important topic as we try to migrate from emergency remote teaching to purposely-designed quality online education.
- 15A: Introduction of topic
- 15B: Interview with Bethany Simunich and Brenda Boyd from Quality Matters
- 15C: Interviews with Stephanie Moore and Jesse Stommel
Transcript:
Phil: Welcome to a special episode of COVID Transitions. I’m Phil Hill, and recently we had an interesting Twitter conversation, or as interesting as you can get on Twitter, that involved Quality Matters and the usage of the rubric and how schools are using it to try to transition from emergency remote learning to online education, to get back to things that we know how to do in online education and improve quality. And it was a fascinating conversation, but it was on Twitter, which is very limited. We have Brenda and Bethany from Quality Matters, and they’ve agreed to join us so we can have a more extended conversation and get to these important topics, and give people more chance to discuss things in depth. So I’m with Brenda Boyd, senior academic director of program services at Quality Matters, and Bethany Simunich, director of research and innovation [00:01:00] at Quality Matters. It’s great to have you both here. And Brenda, welcome to the show.
Brenda: Thanks, Phil. It’s great to be here. We’re glad that we have this opportunity to have this conversation and to talk a little bit about some of the things that we saw, some of the arguments that are being made, and how we were thinking that, well, some of those things are misconceptions. And so we’d really appreciate the opportunity to clear some of those things up today.
Phil: Great. And also, Bethany, welcome, and glad to have you here for what are – you and I were discussing – our first external interviews as part of this podcast. But welcome.
Bethany: Yes. Yes. Thank you so much for having us. And as I mentioned to you before we began, it’s very hard to have good conversations on Twitter. So I appreciate you giving us this opportunity to hopefully start a better conversation and to really bring some new ideas into Quality Matters and the discussion surrounding them.
Phil: And [00:02:00] this will be interesting: we’re going multimodal, switching from Twitter and moving to a podcast, then having a blog post attached to it for people to jump in. As I said, a lot of the trigger for this conversation came from Twitter. Basically I was asking the question, ‘I’m starting to see a lot of pushback and commentary on the usage of Quality Matters in terms of schools trying to do quality assurance or manage the transition from emergency remote teaching to online teaching.’ And we just got a plethora of responses back, both positive and negative. But it raised some subjects that we wanted to get deeper into. Were you guys surprised by the general feedback of the discussion, on the rubric and course redesign process? There seemed to be a lot of emotions involved.
Bethany: Well, I think emotions are high right now in general, but my general reaction and my initial [00:03:00] reaction were actually very similar. On one hand, a little bit of disappointment at the misconceptions that are presented as facts, but I also tempered that with some excitement and even a little bit of hope for renewed conversations about improving how QM helps faculty meet their goals. So, I think that in some sense, this conversation about quality in online learning, so not even specific to QM, but quality in online learning, is coming at a time when you have an entirely new group of faculty that are moving to the online environment. So, we know that we had the remote shift in the pandemic, and a lot of faculty were spending time this summer trying to improve the quality of what they did in the spring and to build on those successes, because I think a lot of us anticipate that we’re going to be shifting back in the fall. You’ve already commented on that. So, I’m sure they already plan to be teaching online. So, if this was your first foray into online teaching and learning, that was a big [00:04:00] gap for you to fill, right?
So, there was a lot of time spent in spring and summer on faculty development. And with that came this conversation about QM and quality, because a lot of institutions use QM. I think that QM, though, is way too often distorted and just seen as the rubric. And in truth, Phil, that really becomes a straw man fallacy. It does surprise me that academics – because we are trained to inquire and to analyze and critique something only after we understand it – would criticize something that, for certain people, they obviously have only a surface-level understanding of, right? So I’d like to actually use that Twitter thread, though, as a way to open up broader conversations with the community about quality in online learning, because the rubric is just one of the tools that QM has. It’s one of the core tools in helping institutions set up an integrated and sustainable quality [00:05:00] assurance process. And that’s the point of QM: to help institutions continually improve their online and digital education and their student-focused learning goals. Brenda, what was your reaction?
Brenda: So, I would have to say that I was also surprised about the pushback against QM on the Twitter threads that I saw, and I was also kind of heartbroken, I’ve got to be honest, because Quality Matters is not a behemoth EdTech company. We have thirty-nine people who work for our little nonprofit and are very passionate about what we do and very passionate about improving online learning. And so some of the things that were said were really, I would even say, hurtful, because we’re all on the same side. We all want to help students, and [00:06:00] we are at a point in time where a lot of people who never thought they would be teaching online have been thrust into that. And institutions are challenged to support faculty in doing that. I think what a lot of people may not understand is that QM is the connector. The rubric isn’t developed by Quality Matters staff; it’s developed by the community. So, we have rubric committees that are composed of faculty who have experience with online teaching. We get input from our community. We analyze the data from the course reviews. Initially it was pretty interesting, because QM is faculty driven. Faculty are the peers doing the reviews. QM staff do not do course reviews. So, there were some real interesting assumptions that were made. We call them peer reviewers [00:07:00] because they are faculty peers who are there to help their colleagues improve through the continuous improvement process that is course reviews.
Phil: But I did see quite a few comments that acknowledged, or pointed out, the role of the college or university. And a lot of what I saw was how much it gets used as a QA tool or a cudgel in the process. Let me take a step back just a little bit. Which parts of the conversation, in your mind, do you see as ‘oh, that’s a legitimate subject that we need to deal with’ versus a mischaracterization? You’ve already mentioned one of the main mischaracterizations is equating QM with the rubric only.
Bethany: I think there were several legitimate points being made. One of them is how QM interfaces with an institutional implementation of quality assurance around online learning. So I think that that’s a legitimate [00:08:00] conversation, not from the standpoint of, ‘oh, that’s all the institution’s purview, and what happens there is just up to the institution, and QM is separate from that,’ but in the sense of how can we have better conversations about how to involve the right people at the right time, get the right individuals at the table for things like quality assurance implementations? How can we have better conversations and use tools better so that this is not always a top-down initiative, and so that quality is continuously focused on student success – as QM positions it – and not weaponized against faculty? So I think that it really brought to light that there are ways and needs for having these better conversations with those that do the implementation of this work. I think all too often faculty are left out of that conversation. Instructional designers may be left out of that conversation. [00:09:00] I think this is a legitimate conversation to have.
The other thing that I thought was a legitimate point is one that I’m actually really excited to have some conversations about. You had mentioned that you have an upcoming podcast with Steph Moore, and we plan to talk with her as well. So she brought up some good points, as did some other faculty, about the flexibility surrounding the rubric. Does it always work for every design approach? I think that’s a legitimate and good conversation to have. And that also, though, dovetailed with some of the other misconceptions that came out. So in reading that thread, it became apparent to me that a lot of faculty are unaware that there are lots of different types of reviews that you can do, including MyCR (My Custom Reviews), so that you can customize the rubric to meet your own faculty and institutional needs. So I kept trying to ask the question, well, what’s the goal? You know, for faculty that maybe feel ‘my course [00:10:00] doesn’t work with QM,’ or who are really unfamiliar with the ways that QM can support the work that they do: what’s the goal? Is the goal continuous quality improvement within that course, or for that faculty member and their students? Is it a larger institutional goal that’s tied perhaps to accreditation?
Phil: Sure. Do you mind if I jump into a very specific point? Because you raised that about the flexibility, and I saw this in several comments about the 42 elements: do you have to go through every one of them? You mentioned flexibility of usage. Should people treat these 42 elements as something you have to go through every one of, or use them all in the same way, or what’s the right way to think of that?
Brenda: Well, I think that, you know, the QM higher education rubric does have 42 specific review standards. They are organized into eight general standard areas. So those general standard areas include things that no one would disagree with, like providing [00:11:00] support to students, or telling students where to go and what to do first, and introductions and overviews. To your question, where do they begin, or how much flexibility is there? It goes back to what Bethany is saying about the goals: what are they trying to do? Are you trying to get all of your courses certified, or are you getting your program certified by QM? What is it that you’re trying to actually accomplish? If your goal is just to improve the quality of your courses and maybe you’re not ready to have everything certified by QM, maybe there’s a subset of standards or selected standards that you want to use. So, we have tools that enable the modification of the rubric. One is called My Custom Reviews, and it enables an institution to take the rubric, copy it into this tool, and then take out the standards that they don’t want to address. And they also have the ability to add their own things into this rubric. [00:12:00] The rubric is designed to be interrelated and holistic. So, there are standards that refer to other standards, and the list of specific review standards that is on the website is not the entirety of the rubric. That just lists the specific review standards. They are intentionally succinct. There is a whole set of annotations behind them that members have the ability to use and integrate into their professional development, to use as examples and best practices, et cetera.
Phil: If you don’t mind me jumping in, to me the theme that I saw was how it’s being used here in 2020. There was a common theme about schools using it as a QA device, saying, ‘OK, we’re going online, everybody’s got to meet these standards, or here’s how we’re defining quality.’ That is, the usage is as a QA method to enforce [00:13:00] a transition from emergency remote teaching without standards to quality online learning. Are you seeing that as a common theme? And what is your commentary about whether that’s an appropriate usage?
Brenda: We’re not seeing institutions coming to us saying we want to certify this whole program before fall begins. Some institutions may be saying these are our quality assurance metrics, this is what we’re moving toward. But we haven’t seen a huge influx of new course reviews. And honestly, we would not recommend moving from emergency remote instruction to the full-blown rubric, which is why we developed the Bridge to Quality, which is the online course design guide. It’s on our website. It’s free. It’s open to everybody. And we’ve been working this summer: Bethany spearheaded the Emergency Remote Instruction Checklist to help faculty transition through the pivot, and then our next step was how do we help them move – exactly your question – from [00:14:00] this emergency remote instruction environment toward the quality standards? We knew that there were going to need to be steps in between there, that it’s unrealistic to take the course design rubric, which is designed to review courses that have been taught a couple of times and have had the opportunity to work the kinks out. We wouldn’t expect to dump 42 standards into your lap and expect them to be magically met at this point in time. It’s just not realistic. And we have seen a huge influx of professional development over this summer to help faculty move toward online. Even in our own Designing Your Online Course workshop, we don’t address all 42 standards; we look at our Essential Standards. We look at that backward design approach, looking at the alignment of the learning objectives. Are your assessments measuring your objectives? Because [00:15:00] there are a lot of faculty out there who may not have thought about these things before. And in the end, this moment gives them that opportunity to focus on what are really the outcomes for my learners and how am I enabling them to get there.
Bethany: Yeah, and just to piggyback on that for a second: regardless of the tools and the processes used, I don’t think there’s a one-size-fits-all approach here when we’re in this very difficult situation with the spring pivot and potentially a fall pivot. So I think most colleges and universities are facing a very big task of trying to do remote learning at a minimum level of quality, because we know at the very least that, for our students, we have to do a little better job than we did in the spring if we end up having to pivot in the fall term as well. And there are lots of tools to help get there. But the rubric is a tool for evaluation, [00:16:00] and that’s not where most remote courses are. The big difference between remote courses and online courses is that online courses are purposefully designed for the online learning environment, whereas remote courses are focused on recreating the face-to-face experience at a distance, and those are two very different goals. So that’s also why I kept asking the question about what is your institutional goal? What is your faculty goal? So when the emergency remote pivot happened, that’s when we created, as Brenda just mentioned, the Emergency Remote Instruction Checklist.
And that was designed to highlight what faculty pivoting to remote should concentrate on first, and really how to connect with your students and how they could improve the remote learning experience as the semester continued and as they had a little bit more time. But as Brenda mentioned, we’re a small staff. We worked nights and weekends to create that as a free public resource. And we knew then that there’s a gap between [00:17:00] that and where faculty and institutions want to be for the fall. And again, that’s why we created the Design Guide. And again, that’s a free public resource. But we also created that understanding that the rubric is not a design guide; it’s an evaluation tool. That harkens back to the Twitter thread as well. There’s a big misperception that the rubric is a design guide rather than an evaluation tool. And I think that hits at the core of why this came out, and at your question: should institutions be using a rubric to get to where they want in terms of their remote course quality?
Phil: But you mentioned that one size doesn’t fit all, and a lot of it gets to teachers who have never done this before. And it raises the question, what is the sweet spot of what QM is designed for in terms of faculty experience?
Bethany: Yeah, that’s a good question. And I’m going to actually [00:18:00] let faculty speak for themselves on this one, because as the research person in QM, I do know the data on the faculty that interact with us and their experience in our professional development, because we regularly pull this data to see how faculty are responding and how they’re interacting. So, for some of our most popular workshops, those on teaching online, designing your online course, and our flagship workshop on applying the rubric, faculty have a high degree of satisfaction regardless of their experience level. So, for the design workshop, for example, those faculty that are newer to online design and teaching have a satisfaction rate of 95%. For those that have eight or more years of experience, those that I would consider very experienced online instructors, their satisfaction is 93%. That’s one way to really say that the QM community is a home for everybody, because those more experienced faculty [00:19:00] bring that knowledge to bear on peer reviews, for example, if they’re serving as a team chair, and on the mentoring that happens between themselves and the other peer reviewers. But there are also opportunities for them to share their additional expertise within workshops, and to help QM expand our own thinking with their innovative design and teaching approaches, alongside those faculty that are newer to online teaching. And as we just talked about, right now we have a whole new section of faculty that really have never come to the table to talk about online teaching or to have a chance to really practice that at their institution. It’s a much larger community than it ever was. And I think it’s really ready to benefit even more from the experience of those that have been doing this for years.
Brenda: I would agree. We see in our professional development faculty with no experience and faculty with 20 years’ experience. I just saw a tweet yesterday by a faculty member who was like, ‘I’ve been [00:20:00] teaching online for 20 years, and I took this professional development workshop and I feel like I’m really going to help my students this fall.’
So, I think that there’s a wide range, and we have entry points for everyone. Our intention really is to have an open and collegial dialogue. Let’s be honest, faculty members don’t necessarily get any professional development on how to teach online unless their institution provides a forum, or they went through an educational technology or instructional design program, or they were lucky enough to have a teaching assistantship or a graduate assistantship that gave them some teaching and pedagogical training in their graduate programs. And I agree that there are a lot of different approaches to doing so. And we want to be clear that this is not about cookie-cutter courses, because while there are 42 standards, there [00:21:00] are many ways to meet them.
We’re technology agnostic. So whatever tools you’re using, we do look to see, does this help support what you’re trying to do? But it’s not about evaluation of a specific set of tools either. So I think that there’s lots of room in the Quality Matters universe for everybody to come in and take what you need, take what helps you. But we wouldn’t recommend that you do that, moving from emergency remote instruction to certification, in the next semester. There is a learning curve; we all know that. This is sort of jumping around.
Phil: But if you go back to the beginning, one of the first things you guys talked about was too much focus on the rubric, as if QM is all about the rubric. And the rubric by itself – correct me if I’m wrong – is very centered on course design, or [00:22:00] evaluation of course design. A lot of the discussion on Twitter was saying, ‘well, that misses the whole element of actual teaching, what happens in the classroom.’ And part of the pushback is that using the name quality, associated too much with just course design, misses a whole crucial element: the actual teaching. So, I guess my question is, what is QM’s role outside of course design, in the actual course teaching and facilitation, the live aspect? Do you guys have a specific role there?
Brenda: Well, you know, Phil, we’re moving in that direction in terms of helping faculty with teaching online. So, we have a teaching online certificate and we also have a teaching online workshop that’s just two weeks long.
But the teaching online certificate gets at the things that are important to be [00:23:00] successful in online teaching. QM won’t be reviewing faculty teaching, and QM is not meant to be a silver bullet. Even if you have a QM-certified course, yes, absolutely, how it’s taught has a tremendous impact on the quality of the learner experience. And so, we have developed some of these things like the teaching online certificate. And in those workshops we barely touch on the rubric. But we talk about gauging your own technology skills. Are you ready to teach online? So, we get into some of the teacher readiness portion. We get into some of the orientation, connecting learning theories to your teaching strategies, evaluating your institutional policies so that you can determine: what do those policies mean for online learners, how do I enforce them, and who do I need to call [00:24:00] if I need to? And thinking about the pedagogy of the course from the teaching aspect: how are you going to take that course design and put it into action?
Bethany: Phil, your question also speaks to the fact that – and I’m saying this as a former face-to-face instructor who then moved online – when I was teaching face-to-face, design and teaching were pretty fully meshed together. I didn’t do a lot of proactive design; I really was uninformed about instructional design and design approaches for online teaching. I was disadvantaged in that way. So I was teaching face-to-face, and I moved online … and online you can’t really design on the fly unless you’re a very experienced online instructor and you really have a high degree of skill in that type of a design [00:25:00] approach. But for me, when I first was moving online, I realized way too late that there was so much that I didn’t know, and I didn’t even know what I didn’t know, right? And the first thing that I had to tackle was really how to design this course, because I quickly realized it wasn’t just about my pedagogical approach. When you’re designing a course purposefully for the online environment, I also have to think about organizing that in an LMS. I have to think about Web design and user experience design and content strategy. I have to think about organizing my course in a logical way that allows students to move through it. I have to make sure they have access to technical support.
And those are things that happen in the design phase and need to be in that course before it starts running, and that’s separate from teaching, right? So QM and the rubric: yes, the design rubric is focused on design, but of course, teaching is the other part of that. And as Brenda mentioned, design is only one part of [00:26:00] the overall good online learning experience that we want our students to have. I think that’s something else that I took away from that Twitter thread: that there may be faculty that are still unsure of all the things that really need to go into a good learning experience for our students. It’s not just a well-designed course, purposefully designed for the online environment. It’s not just a prepared faculty member who is ready to be an effective online teacher. It’s also supporting students and student readiness. It’s also having that institutional infrastructure and support. QM calls that the Quality Pie, and there are certain things that we really help institutions do within that pie. But there’s also a lot that’s institutional purview. As Brenda mentioned, we are technology agnostic. So there are lots of things that have an impact on a faculty member’s design and teaching, you know, like the LMS, like other institutional policies, that are [00:27:00] separate from what Quality Matters helps with. But we’re moving more and more into helping faculty become better online instructors. As Brenda mentioned, we have professional development around those areas.
Brenda: You know, in the face-to-face classroom, we all know as students where we’re supposed to sit, where the teachers are going to be, that class starts at this time and ends at this time. We don’t have that framework online the way it’s always there face-to-face. We have to build the walls and put the seats in and develop the structures. And a lot of what General Standard One is all about is orienting students to that kind of situation. But you can’t do that until you have built the classroom. You can’t build the orientation until you have that done. So, the Bridge is really there to support faculty, walking them through a phased approach with design steps that they can take [00:28:00].
Phil: To go back even further – and I appreciate the full descriptions – you mentioned at the very beginning that part of your surprise at the conversation, or at least dismay, was because you’re a nonprofit, not a large for-profit EdTech company. But to help people understand, how does QM get funded? What is your monetization from an organizational perspective?
Brenda: So, we’re a completely bootstrapped organization, funded through memberships and fee-for-service. Professional development fees, course review fees, and membership fees are what fund QM. We don’t have any venture capitalists underwriting us. We don’t have any grant funders.
Bethany: No foundations, no state funders, no federal funders. That was such – I’ll be honest, Phil – that was a tough [00:29:00] tweet to read, because when you are 39 people at a nonprofit funded by the community that you serve, and the perception from people who obviously haven’t taken a moment to find out what QM is and who QM is, is that it’s a big EdTech company … it’s a disservice, frankly, to the very dedicated and passionate staff that we have at Quality Matters. We are all dedicated to student success and to working on behalf of our members and the community, and we have been working as tirelessly as everybody in higher education and K-12 has since the spring semester. We’re a mission-driven organization, and we are focused on supporting student success in digital learning environments.
Phil: One quick follow-up on this question. You mentioned not everything’s on the website, but where is the line for what can be done for free [00:30:00] with QM materials, like seeing the course evaluation rubric, seeing the full rubric? Or where’s the free/paid divide for a school?
Brenda: If you really want to do QM, we would love to welcome you as a member. Membership gives you access to the full annotated rubric. You get access to our self-review tool, which is the online rubric that faculty can use to self-evaluate their own courses. You get access to our course review management system to do internal reviewing. And then at some membership levels, you can manage your own course reviews with appropriate professional development. So, we want the rubric to be used in the spirit in which it’s intended. We teach how to use the standards, how they’re applied, and how to write helpful recommendations to colleagues through [00:31:00] our professional development. As a member, you get access to these things, plus member rates on our professional development, conferences, etc.
The standards are on our website to give you an idea of what we do. The intention is not for people to take them and go use the one-line specific review standards, because they can be misinterpreted without understanding the annotations behind them. There are a lot of free things that we offer, including the Emergency Remote Instruction Checklist and the Bridge to Quality Course Design Guide we’ve already talked about, and we have a free Research Library, as well as an Accessibility and Usability Resource Site that’s open to everyone. You can just go register and hop in there. The Accessibility and Usability Resource Site is [00:32:00] moderated by accessibility experts from our community. And they are sharing their expertise freely, with information about how to do different things to make your courses more accessible, from Universal Design for Learning to how to make a Word document accessible. You can go in and ask a question and they will come in and answer it. We want to lift all boats. We want to help everyone.
And so we do things such as the Bridge. It refers to specific review standards, but it doesn’t get into all of the detail that’s behind them in the annotations. We are working on a version for members where we are looking at the annotations and how we help turn those into more of a design guide, because we know instructional designers are already using the standards to guide course development. And those can also sometimes help instructional designers when they’re having conversations, because it’s not [00:33:00] just me saying this. There’s an organization that has a standard set that was developed by faculty, and that kind of helps back up their urgings to do the things that are in the rubric, the standards for quality online courses. Does that help?
Phil: Yes, it absolutely does. And I appreciate that full description.
Bethany: I just wanted to add one thing there. So, yeah, we do have memberships like most of the other nonprofits. But I also wanted to add that a lot of the QM community creates additional things that are free to the public and to members, right? So Brenda mentioned a lot of the things that we’ve put out freely for the public, the ERIC, the Bridge Guide. We’ve done about two free public webinars per month since 2020 started. But also within the QM community, just as one example of how QM is helping institutions leverage what they’re doing and do things at a lower cost: for [00:34:00] our largest state system, QM Ohio, they created their own course review bartering system so that their membership can conduct certified reviews, if they want to, at no cost. So there are lots of options within the community. We have a low-cost train-the-trainer model so that if an institution wants to deliver certain QM workshops, they could train their own facilitator to do so and then deliver them for free or low cost, depending on what system they’re in.
Phil: Well, thank you. Yes, that is very helpful. And I’m glad you brought up the community resources as well. Well, listen, I’ve really appreciated your time today. This is obviously a longer episode than we normally do, but I think that this is an important topic at an important time. Bethany and Brenda, I really appreciate your time and your full answers on these subjects that came up, and I certainly wish you the best. Thank you.
Brenda: Thank you.
Bethany: Thank you so much.