What should a new university look like and how would we build one?

Rufus and I were having a discussion over email about the (in my view) ongoing collapse of the current university model, much hastened by AI, and how we would go about planning for and building a new model.

The discussion was as much about the how as the what, although of course both are essential.

As to what such a new model would be - it seems obvious to
me that we need to restore the apprenticeship model of education, and
we need to deal with the ongoing catastrophe of AI use by students, which
is rapidly leading to a generation of WALL-E-fied thinkers:

The Myth of Automated Learning

The argument there is the obvious and convincing one - automation can
increase the efficiency of experts (who are currently driving the
conversation), but will radically halt learning for beginners. Like
students. Who are increasingly lost inside AI.

What would a new institution of post-school learning look like? I
think it would have to follow an apprenticeship model, with a faculty
of, say, 5 people. The point of the training is to give the students
long-lasting habits of careful thought. It would be in-person,
working-day, with classes in the morning, and shared study time in the
afternoon, where the faculty and students would work in the same
place. Flip phones only during working hours. No AI for anything
other than search until the third year, or as modeled by faculty.
Students would collaborate with faculty on projects, but with work
being done in-person, in working hours. Training in data science
(coding and statistics), politics, economics, and the humanities, with
constant cross-talk between them, modeled by faculty. Difficult
debates would be routine, with disagreement and the separation of
evidence from values, again modeled by faculty.

I think some version of that is practical and could start soon. It
seems to me that AI, god bless its cotton socks, is going to have a
massive effect on accelerating the decline of the current model,
because the students are going to leave their courses having learned
very little, and it will start to be obvious to everyone that this is
the case. Some students will realize this before they apply, or, if we
are doing a master’s, afterwards. Meanwhile, a) we’ll need students
trained like this to do sensible work in an AI age, and b) they will
radically outcompete the students who have learned little in their
courses.

How to fund it? I had various thoughts. The most obvious of these
is to run such a course as an independent business unit inside another
university. With good students, we could quickly start to develop a
reputation for high-quality reviews of evidence - taking a wider view
of what counts as evidence, but with solid empirical grounding.
Perhaps we could start consulting.

I think this could work, and I have the time and motivation to do it, but I need other people to bounce ideas off, and to start to build this thing.
That’s how things happen very fast in open-source - you find another
mad person, and ask “how hard can it be?” I haven’t found those
people, as you can see.

On a practical level, I’m sure y’all would agree, insight comes from building
and building comes from insight. As Donald Knuth said - when he gets
stuck on theory he does practice, and when he’s stuck with practice,
he does theory. Meaning only that there’s a limited amount one can
achieve without doing something that will show itself as failing or
succeeding.

2 Likes

Hi @matthewbrett ! If we delve into the history of the university, the original meaning of the term was something like “community of teachers and scholars”. Regenerating and refreshing that model sounds like a worthy project for a Second Renaissance.

My own preferred model for social and educational regeneration is Ikigai-based. You can read more about it here. Facing the Future

This morning I sent my dean a proposal for realigning our school’s guidance program around this model. For example:

  • What do you love? (List of humanities and general education courses to help explore that question)

  • What are you good at? (List of workforce courses to improve answers to that question)

  • What does the world need? (List of social sciences and general education courses to clarify that question)

  • What can you get paid for? (List of workforce courses, practicums, internships, job fairs, etc. to clarify that).

The goal of Ikigai-based studies and discussions is to get students to synthesize cognition, emotion, practice, and action planning. There is plenty of metatheory to suggest why that should be a good thing. I’d be very keen to learn if anyone has a different set of practices to achieve such balance in a quicker or easier way.

Welcome @matthewbrett - these are great questions and reflections. Will respond more fully when back from retreat.

Thanks for your thoughts. Perhaps this is not what you meant, but I now honestly believe there is no future in reform of the current institutions. I see my link is now behind a paywall, but from that link, here is a quote from a student:

I’ve become lazier. AI makes reading easier, but it slowly causes my brain to lose the ability to think critically or understand every word.

I literally can’t even go 10 seconds without using Chat when I am doing my assignments. I hate what I have become because I know I am learning NOTHING, but I am too far behind now to get by without using it . . . my motivation is gone.

Everyone is doing it.

Meanwhile, the teachers are suffering too:

I’ve been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student. That sure feels like bullshit. I’m going through the motions of teaching. I’m putting a lot of time and emotional effort into it, as well as the intellectual effort, and it’s getting flushed into the void.

I guess those of us who teach are familiar with this, from both sides. I don’t think it can keep going for very long before it collapses, and I don’t think the standard ways of post-school teaching can last.

If that’s true, we need to start rapid iteration to work out what real teaching and learning will look like. It seems to me this will have to start with small, bold experiments - where not much of the work is theory, and most of it is practice.

Then the question is - how to start those experiments?

1 Like

Just start! When it comes to education reform, there is a lot of content here, some of it by me: Bildung - Medium

Let’s just say that no one in the Bildung movement is a big AI fan. Human-to-human is more like it.

When I refer to the original university as a “community of scholars”, that’s how to start over. Start with community. As in, people who have something in common. A few years ago I thought I might return to graduate school, essentially to form relationships with people who share my interests. Then I figured out, between social media (like this forum), YouTube, and a variety of non-profit organizations, one can get a better education than in graduate school, at a much more affordable price. The key to turning Second Renaissance into something like a neo-university is just having a community of scholars who share interests and exchange views on each other’s work. No reason to stand on ceremony! Ivy-covered halls can come later in the process.

When you say “just start” - that was what I was getting at with my emphasis on practice rather than theory. If my goal was to find a group of academics who agree that the current system is irredeemably miserable and broken, and will fail, then I am confident I could do that. In fact, talking to my fellow academics, it would probably be harder to find a group of colleagues who did not think that, outside the senior administration, and likely, even there.

But the hard problem here is the starting. I have found, in my now fairly long career, that there is a very small group of people who not only see the problem, but are intensely and single-mindedly focussed on fixing the problem. Thus far, all the work I have been part of, that has made a difference, has been due to the part-accident of finding those people and working with them, working very hard, and with great commitment, on practical problems.

Following that pattern, if we want to build something, we need people who have an overwhelming need to do. As in - what would it take to start an institution that could start admitting master’s students in two years? As in - I will resign from my current job and throw myself at the mercy of this project to make this happen.

Of course, of course, these people are very hard to find. And when those people are found, there are many risks - losing money, failing, just getting nowhere.

1 Like

Just to say - I have found a version of the link I posted before, from before it was behind a paywall. I’ve edited the link in the original post, but posting here too:

In Ikigai terms, the question “what can I get paid for?” does tend to present itself. For any startup venture (educational or otherwise), passion (“what do I love?”), skills (“what am I good at?”) and difference-making potential (“what does the world need?”) are all essential. But then comes that nasty little question about resource requirements.

People start alternative educational programs all the time. For example: https://ecoversities.org/ As with any startup venture, most fail in the first few years, but some may well survive, thrive, and transform educational models in the future.

My personal strategy is to attach myself to going concern startups that have reached enough maturity to have proven basic viability. (2R looks like one of those). Then help them reach the next level for whatever their goals may be. On the cusp of retirement, I don’t need startup funding personally. My “pay” is in the results I wish to see. So as long as costs stay light, I can involve myself with whatever I wish to. So I’m currently involved in supporting roles with startup founders around the world, generally in the educational space. The way to practice that is to just practice - there is no magic formula. Just lots of sweat, and plenty of equity in the future of the planet, equity that will be entirely passed on to future generations.


Just to clarify - I recently resigned from one such startup venture - The London Interdisciplinary School.

And - I’m sure that’s obvious - I have an unfashionably traditional view of effective education. I’ve now become expert in two fields that were developing rapidly - brain imaging and data science - and in both cases I had to learn in an unstructured way, because the fields were new and there were no good courses in them at the time. On the other hand, it was and is clear to me that carefully thought out and structured courses could get students a very long way in a short time. I teach - I suppose most of us teach - to make it easier for my imagined younger self to reach good understanding and solid foundations much more quickly than I was able to. I remember my younger self well enough to understand the wisdom of discipline and structure in the learning process, especially in the first few years of post-school education.

So, the fact of being an educational startup is not much information to go on as to whether I could best devote my time there - as my experience at my previous institution has shown me.

My desire is to teach students to be able to think well, and deeply, in the world that currently exists. Doing that will need skills that we used not to need - such as skills in data analysis and coding - and learning that we have always needed, but are losing, such as history and literature.

So themed universities - such as ecoversities - are not going in the same direction that I am. In fact, if anything, they are going in the opposite direction, in choosing a particular approved way to look at the world as an implicit invitation to entry. I want to rebuild the kind of institution that will teach students to ask hard questions - such as - does the current environmental movement make sense? How does propaganda work to generate widely-held assumptions that are wrong? How should we understand value in a world of competing narratives? Where these questions are open, and where we learn to distinguish pleasing from convincing answers.

Since the advent of ChatGPT, my extra requirement is that the institution understands the profound damage that AI can do to beginner learners.

I would love to learn of institutions that are doing this kind of work - and I have no particular need to be a leader - I want this thing built, I don’t care who builds it.

As a marker, at the moment, the closest thing I know of to what I’m thinking of is the University of Austin, although perhaps that is too engaged in current culture wars for the kind of careful and engaged learning that I want to foster.

I’m curious. What did you love/not love about the London Interdisciplinary School?

To give a little background - I was in the USA for a long time, then came back to teach data science at Birmingham University. That was a shocking experience - because it was clear to me (and to many others) that the heart of the university had long since died, leaving only the appearance of higher education, and some shiny new buildings. I resigned after a few years, expecting not to work in academia again, but then applied to the London Interdisciplinary School (LIS) because it was new, and therefore I had some hope that it would not decay into the mechanical business-centric husk that had been so apparent at Birmingham.

But the LIS has a serious problem, which is that it is focused on one thing - interdisciplinarity - where the increasing collapse of UK higher education has much wider roots. And it absolutely failed my last requirement, because the LIS as a whole fundamentally failed to see what wreckage AI was going to cause, both to learning, and to traditional continuous assessment. To be fair, they are fully in the mainstream of UK educational institutions in that failure. Perhaps it was just more painful to see it unfold in such a young institution, and where we could have worked out what to do more quickly than would have been possible for older and larger organizations.

On AI and interdisciplinary studies, here is an article I wrote a few months ago: Open Letter to My Students, 2024–25 | by Robert Bunge | Medium

More recently, last week I attended this conference, with some of the latest thinking about AI, sponsored by a Jesuit university with a deep faculty in business ethics: Conference | Technology Ethics Initiative | Seattle University

One of the speakers at the conference was a PhD student whose first impulse was not to use AI at all. Then advisors suggested a somewhat different approach: use AI as a research assistant, assuming that you, the human, always know more about the topic at hand than the AI does. That’s about how far I use AI as well - to a minor degree, insofar as it’s embedded in things like Google search. It certainly does no writing or thinking on my behalf!

As a practical matter, for my CS students to get jobs, they need to get lots of practice with AI and to understand what it really is. Most of my energies lately, by contrast, have gone toward defining practices for humans to stay human and to be sure that humans stay in charge of any AI.

If you haven’t had a chance to read that (initially paywalled) post yet, it has the following:

Thanks to human-factors researchers and the mountain of evidence they’ve compiled on the consequences of automation for workers, we know that one of three things happens when people use a machine to automate a task they would otherwise have done themselves:

  1. Their skill in the activity grows.
  2. Their skill in the activity atrophies.
  3. Their skill in the activity never develops.

Which scenario plays out hinges on the level of mastery a person brings to the job. If a worker has already mastered the activity being automated, the machine can become an aid to further skill development. It takes over a routine but time-consuming task, allowing the person to tackle and master harder challenges. In the hands of an experienced mathematician, for instance, a slide rule or a calculator becomes an intelligence amplifier.
…
Automation is most pernicious in the third scenario: when a machine takes command of a job before the person using the machine has gained any direct experience doing the work. Without experience, without practice, talent is stillborn. That was the story of the “deskilling” phenomenon of the early Industrial Revolution.

The obvious and sensible conclusion is that you need to keep beginners away from AI, but that as learners become more competent and experienced, it will become first safe, then necessary, to introduce AI.

Unfortunately, although fairly obvious, I don’t think many institutions have thought this through yet, and at least initially there was an entirely vacuous wave of “we must embrace AI”, which is only slowly receding.

It was revealing how immature the initial reaction was - it seemed clear to me that one should approach AI, as with any new technology or any new medicine, with great care, and one should only introduce such technology into well-established courses when it was clear that it was safe.

I think the student and teacher quotes I posted earlier are pretty typical - at the moment, in current use, AI is not safe for beginners, and indeed, it is very harmful, for the reasons given above.

But - perhaps that is good - perhaps AI will accelerate the insight that was developing slowly. The student deferring to AI for homework and the teacher marking AI homework are both going to see that this process is not worth spending time on - it is - to quote the teacher above - “a bullshit job”. And then, perhaps, both teacher and student will go to look somewhere else for a useful education.

1 Like

In the Seattle area, the big employers - like Amazon and Microsoft - are pushing their employees into AI willy-nilly. All this is CEO/CFO-driven, no doubt. But my students are caught in the gears of the machine in any case.

I completely agree that pure human coding exercises should precede any sort of automation. Just as pencil-and-paper math should precede calculator math. And so on. An analogy I often use is: would you trust a person with a power saw who had yet to master a hand saw? It’s just dumb to imagine that skipping a bit of manual training in whatever the skill is would somehow be an efficiency move.

That being said, I’m also pretty sure that foolish educational short-cuts by students embed their own punishing consequences, so I’m not putting a lot of time or energy into AI-policing student productions. I’ve seen AI creeping in - but I figure anyone turning that in without bothering to analyze or test it is going to be exposed as lacking skills at a very awkward job-related moment. Any students who pay any attention at all to what I am teaching will voluntarily do things the harder way, because in the end it’s the better way.

Hi there! The frustration you’re describing is the mother of all challenges, but I’m glad that you’ve recognised it as something that is worthy of prioritising. I’d be happy to have a chat about it.

(if it’s not obvious - I’m offering an experiment and let’s see what happens)

Happy to chat at any time. My email is [email protected] .

Just got my fall teaching assignment. One course is Web Application Development with Python (Django). Based on this 2R thread (and lots of similar discussion elsewhere), this suggests an in-class experiment.

Students will be given the option to use AI to the maximum extent for all course assignments, or no AI at all, or somewhere in between. There will also be a graded assignment (or a few) in which the requirement will simply be self-disclosure of a) how much AI was used, and b) what were the results. Much of the class discussion will likely center on whether AI is truly bringing productivity and/or learning gains.

My best guess going in is that someone completely ignorant of Django will not be able to get much up and running just by pointing AI at the problem. I would likewise guess that once a baseline Django deployment is in place, essentially through manual coding and configuring, advanced features may well be added more readily with AI support. Human unit testing and integration testing will still be very important, even (and especially) in heavy-AI cases.
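
To make concrete what I mean by a hand-coded baseline, here is a minimal sketch of the kind of Django view, URL wiring, and test I would expect students to be able to trace line by line. The polls app name and the /polls/ mount point are my own hypothetical choices, and this assumes a standard Django project layout:

```python
# --- polls/views.py ---
# Deliberately trivial, so the student can trace every step from URL
# routing through to the HTTP response.
from django.http import HttpResponse

def index(request):
    return HttpResponse("Hello, polls index.")

# --- polls/urls.py ---
# Wire the view into Django's URL dispatcher.
from django.urls import path
from . import views

urlpatterns = [
    path("", views.index, name="index"),
]

# --- polls/tests.py ---
# The human-written check that matters even (and especially) in the
# heavy-AI cases. Assumes the project urls.py includes this app at
# the /polls/ prefix.
from django.test import TestCase

class IndexViewTests(TestCase):
    def test_index_returns_ok(self):
        response = self.client.get("/polls/")
        self.assertEqual(response.status_code, 200)
        self.assertIn(b"polls index", response.content)
```

Run with the usual python manage.py test. The interesting question for the experiment is where, between a baseline like this and the advanced features, AI genuinely starts to pay off.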

Meanwhile, I plan to supplement all this with lots of coaching and philosophical framing about why staying human and cultivating human skills is generally advisable. Results will be in about 6 months from now.

1 Like

The problem there is that it is fairly easy to imagine a world in which one would never practically use a hand saw, and where using a hand saw wouldn’t teach you much that is useful about using a power saw.

My own best attempt is to ask whether you would trust someone to translate a passage of French who could not understand French. They might have a feeling they understand French, because they were used to someone or something else translating for them, but they’d be completely lost trying to translate on their own.

I think you’d conclude that there was an unacceptable risk that they would fail to spot errors, and that you might as well use the AI tools yourself.

I suspect we’re about to put out a generation of coders who don’t have a deep understanding of code, but who a) think they do, and b) don’t know enough to know when the AI is making a mistake, or building things in a messy or inefficient way. I suspect too that the result is a generation of people producing a huge amount of code with many subtle errors, who are not capable of spotting or fixing those errors.
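
To put a concrete face on “subtle errors” - here is a small, hypothetical Python example of my own (not from any real AI transcript) of the kind of plausible-looking code an assistant can produce, where someone who has never been bitten by the bug is unlikely to spot it:

```python
# Plausible-looking helper with a classic subtle bug: the default list
# is created once, at function definition, and shared across all calls.
def add_tag(tag, tags=[]):
    tags.append(tag)
    return tags

print(add_tag("a"))   # ['a']
print(add_tag("b"))   # expected ['b'], but prints ['a', 'b']

# The conventional fix: use None as a sentinel and build a fresh list
# on each call.
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

print(add_tag_fixed("a"))  # ['a']
print(add_tag_fixed("b"))  # ['b']
```

The buggy version runs, often passes casual testing, and fails only when the function is called more than once in a process - exactly the kind of error that a coder without deep understanding will neither predict nor diagnose.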

Put more neatly: AI Coding Assistant

1 Like

There’s an interesting recent reflection on the “hollowing out” of universities, and the reasons for this, at https://www.youtube.com/watch?v=KzLzWjXAEmI

And I just came across this piece, which also argues, forcefully, that AI has made most universities irrelevant, and that they are doomed to collapse.

AI is doing to the universities what Gutenberg did to the monasteries

Large Language Models, however, are delivering the killing blow. Just
as the printing press collapsed the cost of reproducing text, AI has
collapsed the cost of producing texts. This is actually worse news for
universities than Gutenberg was for the monasteries: movable type made
scriptoria unnecessary, but LLMs haven’t only made universities
obsolete, they’ve made it impossible for universities to fulfill their
function.

(where their function was as a gatekeeper for credentials - and where AI has made it possible to get credentials without learning anything).

1 Like