A Push Button Education: The Role of AI in Higher Education

Jean-Marc Côté, At School

For well over a hundred years, futurists have given us vivid illustrations of what a technologically enhanced education might look like. One of the earliest of these illustrations is Jean-Marc Côté’s depiction of the 21st century classroom, At School, displayed as part of the 1900 Paris World Exhibition. Although his depiction is a bit cheeky—no one seriously thought that books would be fed into a hopper that would magically transmit their contents to students via headphones—it does speak to an emerging belief that the future of education would somehow leverage technology for mass education.

Arthur Radebaugh, A Push Button Education

Côté was not alone in his fascination with the role technology might play in education. During the late 1950s and early 1960s, American illustrator Arthur Radebaugh produced a series of weekly syndicated illustrations depicting what 21st century life would look like. And although some of those illustrations were fairly far off the mark (none of us are taking vacations on the moon), others bear a striking similarity to current technologies—especially his depiction of the future of education. According to Radebaugh’s vision, the “push-button education” of the future would allow one instructor, through a series of “sound movies” and “mechanical tabulating machines,” to teach a large number of students more efficiently. Central to Radebaugh’s vision was the idea that these machines would be individually tailored to the unique needs of each student and adapt to the student’s learning style and level. And if Radebaugh’s vision sounds familiar, it should. In essence, Radebaugh depicted a future where machine learning and augmented intelligence would take center stage.[1]

Both critics and advocates make much of the role of technology in education—especially the role that augmented intelligence (AI) should play. The current use of AI cuts across all aspects of the educational enterprise. AI is routinely used to manage facilities, enhance security, and streamline business processes, as well as to target advertising, determine the probable success of applicants, and improve student learning and retention. And for good reason. Skyrocketing college costs have far outstripped inflation over the last several decades, and we face a completion crisis in which only about half of all students complete their degrees. If AI can help students succeed and bring down costs through improved efficiency, why shouldn’t it be used?

Gartner's Hype Cycle

The use of AI creates a number of challenges in an educational environment. In a 2017 article for MIT Technology Review, Rodney Brooks outlined seven critical problems with the predictive uses of AI—several of which are particularly important as we consider the use of AI in higher education. Topping Brooks’ list is Amara’s Law: We tend to overestimate the effect of technology in the short run and underestimate the effect in the long run. Or, in the Gartner Hype Cycle model, the peak of inflated expectations quickly follows a technology trigger but is soon succeeded by the trough of disillusionment and a much slower slope of enlightenment before settling into a plateau of productivity. Much of the initial conversation around AI in higher education extolled the possible benefits AI might provide: adaptive learning would allow students to move at their own pace and receive personalized instruction; targeted student advising could assist students before they encountered problems; algorithms could direct students to fields of study where they would most likely be successful; and sophisticated customer relationship management software could help admissions offices target advertising to those students most likely to be interested in, and successful at, their institution, thus saving money and better assuring student success. The extent to which these promises have been delivered on, however, is questionable. Research on adaptive learning is still limited and, as a 2016 SRI study found, experiments on its effectiveness are still inconclusive. It’s fair to say that the impacts of AI in higher education have been, as Amara’s Law predicted, overestimated in the short run. And then there is the other side of Amara’s Law to consider—underestimating the effects of AI in the long run.

Although there is no educator equivalent to the Hippocratic Oath, Hippocrates’ warning of primum non nocere (first, do no harm), found in Of the Epidemics, is still applicable. Our goal as educators is to support our students and provide them with the tools for success. What happens if, in our haste to help some students succeed, we inadvertently injure others? This scenario isn’t so far-fetched, especially when set against the backdrop of a history of racially motivated educational discrimination. It wasn’t that long ago that women interested in medicine were routinely advised into nursing programs rather than medical programs, or that African Americans interested in science and technology were advised to become teachers in racially segregated schools rather than scientists (assuming they were advised to go to college at all). In all of these cases, though the real drivers were misogyny and racism, the stated rationale was often pseudo-scientific: women had different mental abilities than men and could not understand advanced math and science; African Americans were better suited for manual labor or vocational work than for theoretical studies.

We often mistakenly assume that science and technology are objective and unbiased. Sadly, there is ample evidence to contradict that belief. In 1996, Batya Friedman and Helen Nissenbaum outlined three types of bias found in computer systems—preexisting bias, technical bias, and emergent bias—all of which speak to Amara’s concern about underestimating the long-term impact of AI.

  • Preexisting bias: “Preexisting bias has its roots in social institutions, practices, and attitudes. When computer systems embody biases that exist independently, and usually prior to the creation of the system, then the system exemplifies preexisting bias. Preexisting bias can enter a system either through the explicit and conscious efforts of individuals or institutions, or implicitly and unconsciously, even in spite of the best of intentions.”
  • Technical bias: “Technical bias arises from technical constraints or technical considerations.” This can include limitations in hardware and software as well as decontextualized algorithms and incomplete data sets.
  • Emergent bias: “Emergent bias arises in a context of use with real users. This bias typically emerges some time after a design is completed, as a result of changing societal knowledge, population, or cultural values. User interfaces are likely to be particularly prone to emergent bias because interfaces by design seek to reflect the capacities, character, and habits of prospective users.”

It’s not that difficult to imagine how these types of bias could play out in an educational context.

  • Preexisting bias: An institution decides to use adaptive learning in its developmental math course as a way of accelerating student readiness for college-level math. Primary instruction takes place via software, and instructors provide supplemental instruction as needed. Because primary instruction is computer-based, students who lack basic computer skills or who learn better in a faculty-led classroom environment may not do as well. As a result, factors external to the content knowledge may negatively impact learners.
  • Technical bias: Predictive analytics software often draws on historical data sets to develop institutional algorithms. As a result, students who do not fit the model of the “typical” student may be either flagged as at-risk or simply ignored by the algorithm. In other words, incomplete data sets will always produce incomplete results (see the sketch after this list).
  • Emergent bias: An institution purchases predictive analytics software to identify at-risk students who should receive early interventions. Over time, in an effort to improve its student success rates, the institution instead uses the software to identify at-risk students and advise them to drop classes or even leave the institution.
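
To make the technical-bias example above concrete, here is a minimal, purely hypothetical sketch (the enrollment categories, record counts, and completion figures are all invented for illustration, not drawn from any real institution or vendor product) of how a naive "risk model" built from incomplete historical records treats groups it has rarely or never seen:

```python
# Hypothetical illustration of technical bias from incomplete historical data.
# All categories and numbers below are invented for the sake of the example.

# Historical records the institution happens to have on file:
# (enrollment_status, completed_degree). Part-time students are barely
# represented, and fully online students do not appear at all.
history = (
    [("full-time", True)] * 90 + [("full-time", False)] * 10
    + [("part-time", True)] * 2 + [("part-time", False)] * 3
)

def predicted_completion_rate(records, status):
    """Predict a group's completion rate from whatever history exists for it."""
    outcomes = [completed for s, completed in records if s == status]
    if not outcomes:
        return None  # the model has nothing to say about this group
    return sum(outcomes) / len(outcomes)

for status in ("full-time", "part-time", "online-only"):
    rate = predicted_completion_rate(history, status)
    count = sum(1 for s, _ in history if s == status)
    if rate is None:
        print(f"{status:12s}: no historical data -> ignored or mis-scored")
    else:
        print(f"{status:12s}: predicted completion {rate:.0%} (based on {count} records)")
```

Here the part-time "prediction" rests on five records, and the online-only group is invisible to the model entirely. Real predictive analytics products are far more sophisticated than this toy, but their dependence on whatever the historical data happens to contain is the same.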

None of these examples are far-fetched or extreme. For example, in 2016 the president of Mount Saint Mary’s University proposed using the results of a student survey to flag students likely to fail and urge them to drop out. As he put it, “You just have to drown the bunnies … put a Glock to their heads.” And as Manuela Ekowo and Iris Palmer write in New America’s The Promise and Peril of Predictive Analytics in Higher Education, “If the algorithm used to target at-risk student groups is a product of race or socioeconomic status, some students could be unfairly directed to certain types of majors, adding to the unequal opportunity in society. If poor students are told they cannot succeed in STEM majors, for instance, they will be deprived of pursuing some of the most lucrative careers.” Additionally, faculty and staff may unconsciously communicate to students flagged as at-risk that they are likely to fail and thus create a self-fulfilling prophecy.

Given these challenges, what should higher education do about AI? Internally, there are several avenues that institutions should explore.

  • Colleges and universities should make sure that any use of AI is well-considered, engages all the impacted stakeholders, and adheres to ethical use standards.
  • We must be transparent about how AI is being used and refuse to use “black box” algorithms that obscure how inputs are turned into outputs.
  • And, finally, we have to encourage faculty and staff to engage in research that tracks the usage and impact of AI, including developing case studies as well as technical and pedagogical best practices.

Externally, higher education has the ability to significantly shape the development and use of AI.

  • On the most basic level, we can work to improve the diversity of engineers and scientists and, in this way, help limit preexisting bias.
  • Colleges and universities should also be actively engaged in developing cross-disciplinary partnerships that help inform conversations about the general use of AI and its impact on society. These conversations should include developing the workforce and educational skills that will be necessary to thrive in a world full of AI.
  • Scholars also have the opportunity to lead conversations around developing ethical use standards and the impact of AI on human rights. Instead of just focusing on the how of AI, higher education can lead conversations on the why of AI.

As Parker Palmer wrote in To Know As We Are Known, “To teach is to create a space in which the community of truth is practiced… The aims of education [are] knowing, teaching, and learning.” Such an understanding of education is a far cry from the impersonal automated classrooms of the future that Côté and Radebaugh depicted. Education is more than listening to a lecture or pushing buttons; it is ultimately about engaging with a community of learners. AI has the potential to greatly improve the quality of that engagement, but only if we maintain a focus on the people who are learning rather than the technology being used to teach them.

 

[1] Popular conversation tends to define AI as artificial intelligence, which carries the implication that machines will take over certain functions and replace humans. A more nuanced approach, and one that is better suited for education, is augmented intelligence, where machine learning augments the capacity of human users rather than replacing them. In the augmented intelligence model, humans retain primary agency. This post focuses on augmented intelligence rather than artificial intelligence in higher education.