That’s my current view. So for example, when I ask Copilot a question like “what is the carrying capacity of Wales using pre-industrial technology alone?”, it 1) looks up a bunch of articles on the Internet, 2) takes an average, 3) presents a range, 4) does some fairly simple math, and 5) presents the results in a summary with a small table. I certainly could have done all that myself; it just saved me an hour or so not to have to. And of course, if the resulting number fed into some vital policy debate, the entire analysis would need to be reworked by critical humans anyway, just to be sure of the quality of both the research and the calculations. At best, Copilot can bang out rough estimates and first drafts in a big hurry. That’s useful as far as it goes.
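For concreteness, the “fairly simple math” in step 4 is just Fermi-estimate arithmetic. Here’s a minimal Python sketch of the shape of that calculation; every number below is a placeholder assumption for illustration, not a researched figure:

```python
# Back-of-envelope carrying-capacity arithmetic, the kind of thing
# Copilot does in step 4. ALL constants here are hypothetical
# placeholders, not sourced data.

ARABLE_HECTARES = 300_000            # assumed arable land in Wales (placeholder)
YIELD_KCAL_PER_HA_LOW = 2_000_000    # assumed pre-industrial yield, low end (placeholder)
YIELD_KCAL_PER_HA_HIGH = 4_000_000   # assumed pre-industrial yield, high end (placeholder)
KCAL_PER_PERSON_PER_YEAR = 2_000 * 365  # assumed 2,000 kcal/day diet

def carrying_capacity(hectares: float, yield_kcal_per_ha: float) -> float:
    """People supportable if the land's total calories were fully consumed."""
    return hectares * yield_kcal_per_ha / KCAL_PER_PERSON_PER_YEAR

low = carrying_capacity(ARABLE_HECTARES, YIELD_KCAL_PER_HA_LOW)
high = carrying_capacity(ARABLE_HECTARES, YIELD_KCAL_PER_HA_HIGH)
print(f"Estimated range: {low:,.0f} to {high:,.0f} people")
```

The point is how little the machinery amounts to: a few multiplications and a division. Which is exactly why the output needs human vetting of the inputs before it informs anything serious.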
Notice also that I’m not even tempted to ask AI questions like “should the Welsh press for devolution from the UK?” Maybe “summarize arguments for and against Welsh devolution”, but that’s really just glorified search. The political question itself is none of AI’s business, IMO.
In the context of an upcoming textbook, I drafted the AI chapter. That required covering the spectrum of opinion from AI skeptics (like Y N Harari) to AI cheerleaders (like Ray Kurzweil). One thing I did was summarize Vervaeke’s talk on Hobbes-Descartes-Pascal, precisely to flag that “intelligence” or “reason” has never achieved any consensus definition. The 17th-century debate was never resolved, and if anything, the topic has only grown more complex since. (I’m team Pascal, BTW.) So the Turing Test is not well defined - it depends entirely on what one counts as “human-like” behavior.
Although spreading the theoretical options out on the table for students to evaluate themselves is good practice, at the end of the day we’re all situated, need to make practical moral judgements, and have to live with the results of whatever bets we place. My current bet is that education needs to lean more into the right brain: 1) to correct the cultural skew of the past 500 years, and 2) to help human students differentiate their contributions from AI’s, so they will clearly have value to offer beyond whatever the bots spit out.

Paradoxically, that means I’m going to encourage students to throw AI at everything (to get familiar with it), while class effort remains almost entirely human-to-human interaction. Even if the school itself shuts down in some impending collapse, a radically relational human-to-human education program can still transpire around hearths or campfires or whatever gathering space becomes available. The “school”, as such, is really the people. AI can serve as a sort of text, and if AI goes away (due to loss of power, connectivity, access, etc.), printed text can go back to being what it once was.