Cengage to launch Student Assistant, a bot to help you understand your text

By Darren Johnson
Campus News

Normally a journalism story shouldn’t break into “first person,” but sometimes using “I” is needed if the writer brings a certain expertise or perspective to the piece and also participated in the story in some way.

Recently, I spoke with a couple of the experts at Cengage, the huge, century-old higher-ed company previously known largely for physical textbooks but now almost wholly virtual. They’re beta testing something called Student Assistant; it’s AI-powered and sure to transform the teaching of at least lower-level college courses in the very near future. Student Assistant will go live this coming spring.

I also bring myself into this story because I’m an instructor – Communications/Journalism – and the use of AI is heavily debated among faculty. I belong to several social media groups that argue the pros and cons of using AI in higher education. (Personally, I see some positive applications for it, and I believe it’s on the instructor not to let students use it to “cheat.”)

Via Zoom, I recently spoke with Cheryl Costantini (CC), SVP of Product Management at Cengage Academic, and Michelle Gregory (MG), Head of Data Science, AI and Innovation at Cengage Group about the Student Assistant, and was able to test it out.

Currently, some professors in areas like economics, management and psychology have been using it in the teaching of their courses, and feedback has reportedly been very positive.

In case you’re unaware, many professors – especially in lower-level undergraduate courses – no longer just assign a textbook for students to buy and digest on their own time. Instead, classes partner with a company like Cengage to offer virtual versions of the texts that are much more interactive and are embedded into the college’s learning management system (LMS) modules for that course.

Student Assistant now seems to take that to the next level. While within a Cengage learning module, students can be quizzed and ask questions of the AI.

The Cengage representatives assured me that students can’t “cheat” with it. The Student Assistant is trained to get the student to think critically toward finding a solution to the question on their own.

One sample question I tried had to do with supply vs. demand – why prices may rise or fall in a certain environment. The Assistant did a good job of asking me questions so that I could come up with the right solution on my own.

Costantini and Gregory said that this AI is limited to the textbook and its author’s knowledge base, so it won’t go outside that sphere for other answers that may potentially be false.

But at the same time, the AI did have that fun conversational flavor AIs tend to have, to help build rapport with the user. It was able to joke around when a totally unrelated prompt about Taylor Swift was inserted, then nudge us back on track.

It’s also able to interpret spelling and grammatical mistakes; this is good, as not all college students are native English speakers. (Currently, the AI is only in English, but the developers seem open to adding other languages down the road.)

I asked if an instructor could insert their own knowledge into the AI – perhaps the instructor knows something the textbook author doesn’t – but that’s not a feature yet. Still, an instructor could write their own tests in the LMS with a combination of questions from the text and their lectures.

Personally, I will consider using this for my lower-level classes that would normally use a textbook, once Cengage creates a Student Assistant for those courses – say, Introduction to Mass Media or Visual Culture.

Here is our Q&A:

DJ: We’re going through some rapidly changing times in higher education with the emergence of AI. There are pros and cons. How are you ensuring you’re doing this the right way?

CC: It is a really interesting but also confusing time, because we aren’t really sure. We think there is potential to improve learning and maybe to save some time for faculty, but we’re also very cautious about it. At the same time, we are thinking about privacy concerns, ethical matters, the integrity of the content.

Something that’s really important to us at Cengage is the quality of the content because we are helping to educate students and we want to make sure that it is the right content and that it is accurate and that it is reliable.

And so we were really grappling with a lot of those questions that I think many people across academia are also grappling with.

But we still knew that there was promise there. And so we brought Michelle on board, who had all of this experience, to really help us think through how we should think about a solution and what problems we are trying to solve.

MG: I think what’s exciting about this product is that we have tested virtually all the large language models out there – Claude, ChatGPT 4, ChatGPT 3.5 – and, as new ones come out, we can test them instantly with the infrastructure that we have. But importantly, this bot only points to the textbook that you see here. It doesn’t get any extra information from the large language models.

We have it walled off for that reason. And to train the models, we actually used instructors who teach from these textbooks. They rate each individual dialogue that comes through on a five-point rubric.

That’s how we trained it to behave the way we want. And early on, it was giving the right answer too often, and the instructors said, “No, I can’t do that.”

So we were able to go in technically and retrain it, and retrain it again, so it never just gives the right answer. Being able to point only to that text gives instructors confidence that it brings in no new material – no unknown materials being added to the learning.

And it also has that personable kind of interaction. We’ve had some great feedback from instructors along the lines of “this is how I would respond as an instructor,” which is not surprising, given that they helped train it. It was really important to us to get that kind of feedback on how they want to use it.
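[For technically minded readers: Cengage hasn’t published how Student Assistant is built, but the “walled-off” behavior Gregory describes maps onto a common grounding pattern. The Python sketch below is hypothetical – the sample passages, prompt wording and keyword retrieval are my assumptions, not Cengage’s code. It simply restricts a model to supplied textbook excerpts and instructs it to coach rather than answer.]

# A minimal, hypothetical sketch of "walled-off" textbook grounding.
# Everything here (passages, rules, retrieval) is illustrative, not Cengage's.

TEXTBOOK_PASSAGES = [
    "Supply is the quantity of a good that sellers will offer at a given price.",
    "Demand is the quantity of a good that buyers will purchase at a given price.",
    "When demand rises faster than supply, prices tend to increase.",
]

SOCRATIC_RULES = (
    "You are a study assistant. Use ONLY the textbook excerpts provided below. "
    "If the excerpts do not cover the question, say so. Never state the final "
    "answer outright; instead, ask guiding questions that lead the student to it."
)

def retrieve(question: str, passages: list[str], top_k: int = 2) -> list[str]:
    # Naive keyword-overlap scoring standing in for a real search index.
    q_words = set(question.lower().split())
    scored = sorted(passages,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question: str) -> str:
    # Assemble the walled-off prompt sent to whichever LLM backs the bot;
    # the model never sees anything beyond these excerpts.
    excerpts = "\n".join(f"- {p}" for p in retrieve(question, TEXTBOOK_PASSAGES))
    return (f"{SOCRATIC_RULES}\n\nTextbook excerpts:\n{excerpts}"
            f"\n\nStudent question: {question}")

if __name__ == "__main__":
    print(build_prompt("Why do prices rise when demand goes up?"))

[The real system is certainly more sophisticated – Gregory notes they can swap in whichever large language model tests best – but a wall like this is model-agnostic: the bot only ever sees the excerpts it is handed.]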

DJ: Even though you’re officially launching this spring after testing, this AI is something you will keep fine-tuning?

MG: We’ll be engaging with both faculty and students throughout the process to understand their experience. You know, the reason for the beta was really to do a few things. One was to understand what it would take – the capabilities we need internally – to develop this at our level of quality and at scale, right?

So there’s sort of technical feasibility and content feasibility that requires us to work with content differently than we normally do.

But then there are also all the other external, market-oriented things that we wanted to test. Does this help improve student learning? What is faculty’s perception of it? What is students’ perception of it? Is there demand for more? All of those questions will be answered in the context of the beta, and that will inform what we do going forward.

DJ: I like the way it looks. So you see AI as something to embrace, and something that is going to be a part of your business plan for the future?

CC: Definitely, in the way that it can help improve learning and address productivity needs. When you think about faculty, as you know, they’re always busy. And if there’s anything AI can do to take some of the burden off of them and help them coach, guide and support their students in learning, that’s great.

DJ: So far, your beta testers have been professors who, I would assume, are AI-friendly. When you launch, how will you handle those professors who may be more resistant to it?

CC: I’d say that it’s very clear from students – and we had a recent employability survey that said this – that students want to better understand how to use AI in all aspects of their lives.

The need for AI literacy is really high. And students right now are feeling unprepared to start their careers without understanding the safe and responsible use of AI in various aspects of what they do.

So, because AI is here to stay – it’s a part of their lives – I think it’s important for us to help students at this point understand ways they can use it safely and responsibly, in ways that will actually help them.

MG: Yes, and it will be part of their jobs in the future. No matter what job they go into – marketing, math, data science, economics – you know, every company uses it somehow, and they do have to be prepared. I would add one thing to that. There are a lot of reasons to be wary of AI. As you pointed out, when you use a large language model, it’s basically indexing a lot of the web, and we all know you can’t trust everything you read on the web; it’s not indexing just vetted sites. So having those instructors be a part of the process, so they can trust it, is key. The Student Assistant only points to this text. It only interacts in a way that you, as an instructor, find useful. Bringing instructors into that process, I think, is also useful.
