
A Man on the Inside: Netflix comedy offers a timely defence of higher education

Season 2 of Netflix’s A Man on the Inside finds Charles Nieuwendyk, private investigator and retired engineering professor (played by Ted Danson), undercover at Wheeler College.

The mission: recover the college president’s laptop. This might not seem juicy, but said laptop contains sensitive information about a $400 million donation by a tech multibillionaire, Brad Vinick.

As someone who has lived and studied academic life, I find the series, created by Michael Schur (also behind The Good Place, which starred Ted Danson, among other hit series), both funny and uncomfortable because it hits close to home.

Budgets trimmed to the bone

The P.I. is thrilled by his university case, calling it something “I can really sink my teeth into.”

Wheeler College, founded in 1883, has seen better days. It is struggling financially and its leadership is unpopular. The board of trustees hired a president who trims department budgets to the bone, cuts student aid and embraces corporate sponsorship — as well as the bonus he receives with every major donation.

These measures are not enough. Enter Vinick.

‘A Man on the Inside’ Season 2 trailer.

Vinick’s secret plan — “Project Aurora” — is to fire half the professors, exclude faculty from decision-making and close what he considers “non-essential departments,” leaving “three tracks of study — biotechnology, economics and computer science to prepare young adults for life in the modern world.”

President Jack Beringer knows Vinick’s intentions but does not want anyone to know he knows. Faculty uprisings would not help his bid for a higher-paying university job in Dallas, where he ate the best steak ever.

Language of efficiency, innovation

Any campus stroll reveals that Wheeler’s “Pepsi T-Mobile Covered Garage brought to you by Sephora” (Episode 4) is only a slight exaggeration.

Vinick’s language of efficiency and innovation dominates in real life. Universities are run increasingly on a corporate model, as numerous studies have demonstrated, including my collaboration with Maggie Berg in our book The Slow Professor: Challenging the Culture of Speed in the Academy.

Budgets and programs are being slashed and, under the guise of economic necessity, the principles of higher education are undermined.

Professors also satirized

While Beringer and Vinick are the villains of the piece, there are, of course, some digs at the professors. (I admit we are an easy target).

The musicologist, for example, will abandon any conversation mid-sentence when inspiration hits.

In Episode 4, we see the chair of the English department is a snob about books you can buy at airports.

However, the show resists indulging in nutty, overpaid professor stereotypes because it recognizes, in the words of Dr. Benjamin Cole, head of the English department, “these are not the best of times.” The show focuses on staff and faculty efforts in an era of budget cuts and attacks on what the billionaire investor calls “pointless subjects” like art history and philosophy.

Holly Bodgemark, the provost, is so overworked she swallows nicotine gum (“It works faster if it goes right to the stomach”) and mixes her own “Peptocoffee.”

The musicologist may be flaky, but she buys used instruments out of her own pocket for students who can’t afford them. Money is tight for students, too. Claire Chung works a dozen jobs to pay for tuition and housing. “When do you sleep?” Nieuwendyk asks. “In class,” she replies.

Defending higher education

To defend higher education, the show calls in the big guns: Ozymandias, a sonnet by 19th-century Romantic writer Percy Bysshe Shelley. It’s mentioned in one of Cole’s lectures, where he recites some of its lines and comments on its continued relevance: “Money, fame, power do not last. But ideas … can endure.”

Literature professor Dr. Cole tells his students: ‘ideas … can endure.’ (Netflix)

Published in 1818, Ozymandias speaks of a “traveller from an antique land.” The traveller comes across the remains of a sculpture with an inscription that reads:

“‘My name is Ozymandias, King of Kings,

Look on my Works, ye Mighty, and despair!’”

The rest of the sculpture is a “colossal Wreck,” and the king’s boast has dwindled into unintentional irony.

Given that the show is American, the literary allusion might be a veiled reference to the No Kings protests.

Making sense of the present

The series seems to side with philosophers like Martha Nussbaum, who argues that a liberal arts education can help us make sense of the present and read it critically.

Vinick is a modern Ozymandias. He wants to be immortal, literally (he undergoes longevity treatments) and figuratively (he commissions oil portraits of himself). As the professor of fine arts notes in the first episode of Season 2: “Newsflash: the billionaire is a narcissist.”

Not to give away the mystery, but a crisis is averted. Wheeler is safe … for now. It might go under, but, as the provost says, “better to end on our own terms.”

And those terms are: education is not a business; it cannot be reduced to the delivery of quantifiable outcomes. The book What Are Universities For?, by Stefan Collini, professor emeritus of intellectual history and English literature, makes this case in a particularly compelling (and at times laugh-out-loud) way.

Higher education is a public good because it teaches critical thinking and civil debate and prepares engaged citizens.

Community

Good satire like A Man on the Inside points out the problems as well as possible remedies. Vinick mocks the notion of community, but the show values it above all because, without it, resistance is impossible. Wheeler College’s faculty and staff celebrate each other and band together across disciplinary divides.

In the words of the provost in the last episode of the season, they are committed to protecting “community and knowledge for the sake of knowledge.”

Schur’s comedy offers a timely defence of higher education and is notable for bridging the gap between academics and the general public.

The Conversation

Barbara K Seeber received a Social Sciences and Humanities Research Council Institutional Grant at Brock University.


Financial case for college remains strong, but universities need to add creative thinking to their curriculum

Unemployment rates are lower among people who have a college degree, compared to those with a high school degree. Wong Yu Liang/iStock Images/Getty Images

A college degree was once seen as the golden ticket to landing a well-paying job. But many people are increasingly questioning the value of a four-year degree amid the rising cost of college.

Almost two-thirds of registered voters said in an October 2025 NBC News poll that a four-year college degree isn’t worth the cost – marking an increase from 40% of registered voters who said that college wasn’t worth the cost in June 2013.

Caroline Field Levander, the vice president for global strategy and an English professor at Rice University, argues in her December 2025 book “Invent Ed” that people have lost sight of two factors that made universities great to begin with: invention and creativity.

Amy Lieberman, education editor at The Conversation U.S., spoke with Levander to break down the benefits of going to college and university – and how schools can better demonstrate their enduring value.

How can we measure the value of a college degree?

College graduates earn substantially more than people who do not have a college degree.

The average high school graduate over a 40-year career earns US$1.6 million, according to 2021 findings by the Georgetown University Center on Education and the Workforce. The average college graduate, over this same 40-year time frame, earns $2.8 million. That $1.2 million difference amounts to around $30,000 more salary per year.

People who earn a degree more advanced than a bachelor’s, on average, earn $4 million over 30 years, making the lifetime earning difference $2.4 million between these graduates and people with just a high school diploma.
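The arithmetic behind these comparisons is straightforward. Here is a minimal sketch, using only the Georgetown figures and the career lengths quoted above:

```python
# Lifetime-earnings arithmetic using the figures quoted above (US dollars).
high_school_lifetime = 1_600_000      # over a 40-year career
bachelors_lifetime = 2_800_000        # over the same 40-year career
advanced_degree_lifetime = 4_000_000  # over 30 years, as reported above

bachelors_gap = bachelors_lifetime - high_school_lifetime
print(bachelors_gap)        # 1200000: the lifetime difference
print(bachelors_gap // 40)  # 30000: roughly $30,000 more per year over 40 years

print(advanced_degree_lifetime - high_school_lifetime)  # 2400000 lifetime difference
```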

College graduates are also better protected against job loss, and they weather job disruption cycles better than high school graduates.

The unemployment rate for people with a high school degree was 4.2% in 2024, according to the U.S. Bureau of Labor Statistics. By contrast, 2.5% of people with a bachelor’s degree and 2.2% of people with a master’s degree were unemployed in 2024.

Do any of these benefits extend beyond individual students?

In addition to the substantial financial benefits college graduates experience, colleges and universities are major employers in their communities – and not just of professors and administrators. Higher education institutions employ workers in every trade, from construction workers to police officers.

Universities are crucial to developing and strengthening the U.S. economy in other ways. The discoveries that faculty and researchers make in laboratories lead to new products, businesses and ideas that drive the U.S. economy and support the country’s financial health.

Researchers at the University of Texas Southwestern did important work in helping to discover statins, while scientists at the University of Pennsylvania developed the mRNA vaccine. The list of inventions that started at universities goes on and on.

Some people are questioning the value of a degree. What role can universities play in reassuring them of their relevance?

Discovery and invention have traditionally been the focus of many graduate programs and faculty research, while undergraduate college educations tend to focus on ensuring that students are able to successfully enter the workforce after graduation.

Undergraduate students need to gain competency in a field in order to contribute to society and advance knowledge.

But I believe universities need to teach something else that is equally valuable: They also need to build creative capacity and an inventive mindset into undergraduate education, as a fundamental return on the investment in education.

Employers report that creativity is the top job skill needed today. The IBM Institute for Business Value, for example, concluded in 2023 that creativity is the must-have skill for employee success in the era of generative AI.

The Harvard Business Review reports that employers are developing short courses aimed at building creative capability in their workers.

Creativity and innovation are both likely to become increasingly important for young people entering the workplace, especially as AI continues to grow. Andriy Onufriyenko/iStock/Getty Images

What can faculty and students easily do to encourage creativity and innovation?

Professors can build what I call a “growth mindset” in the classroom by focusing on success over time, rather than the quick correct answer. Faculty members can ask themselves as they go into every class, “Am I encouraging a growth mindset or a fixed mindset in these students?” And they can use that answer to guide how they are teaching.

Students could also consider committing to trying new courses in areas where they haven’t already been successful. They could approach their college experience with the idea that grades aren’t the only marker of success. And I think they could benefit from developing thoughtful ways to describe their journey to future employers. Simple practices like keeping a creativity notebook where they record the newest ideas they have, among many others that I describe in my new book, will help.

And university leaders need to open the aperture of how we define our own success and our university’s success so that it includes creative capability building as part of the undergraduate curriculum.

The Conversation

Caroline Levander does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


New industry standards and tech advances make pre-owned electronics a viable holiday gift option

It's easier than ever to repair or recycle electronic devices. Elisa Schu/picture alliance via Getty Images

Electronic gifts are very popular, and in recent years, retailers have been offering significant discounts on smartphones, e-readers and other electronics labeled as “pre-owned.” Research I have co-led finds that these pre-owned options are becoming increasingly viable, thanks in part to laws and policies that encourage recycling and reuse of devices that might previously have been thrown away.

Amazon, Walmart and Best Buy have dedicated pages on their websites for pre-owned devices. Manufacturers like Apple and Dell, as well as mobile service providers like AT&T and Verizon, offer their own options for customers to buy used items. These sales rely on a large volume of used products, supplied by an entire line of businesses that has emerged to process used, discarded or returned electronics.

Those developments are some of the results of widespread innovations across the electronics industry that supply chain researcher Suresh Muthulingam and I have linked to California’s Electronic Waste Recycling Act, passed in 2003.

Recycling innovation

Originally intended to reduce the amount of electronic waste flowing into the state’s landfills, California’s law did far more, unleashing a wave of innovation, our analysis found.

We analyzed the patent-filing activity of hundreds of electronics firms over a 17-year time span from 1996 to 2012. We found that the passage of California’s law not only prompted electronics manufacturers to engage in sustainability-focused innovation, but it also sparked a surge in general innovation around products, processes and techniques.

Faced with new regulations, electronics manufacturers and suppliers didn’t just make small adjustments, such as tweaking their packaging to ensure compliance. They fundamentally rethought their design and manufacturing processes, to create products that use recycled materials and that are easily recyclable themselves.

For example, Samsung’s Galaxy S25 smartphone is a new product that, when released in May 2025, was made of eight different recycled materials, including aluminum, neodymium, steel, plastics and fiber.

Combined with advanced recycling technologies and processes, these materials can be recovered and reused several times in new devices and products. For example, Apple invented the Daisy Robot, which disassembles old iPhones in a matter of seconds and recovers a variety of precious metals, including copper and gold. These materials, which would otherwise have to be mined from rock, are reused in Apple’s manufacturing process for new iPhones and iPads.

How do consumers benefit?

In the past two decades, 25 U.S. states and Washington D.C. have passed laws requiring electronics recycling and refurbishing, the process of restoring a pre-owned electronic device so that it can function like new.

The establishment of industry guidelines and standards also means that all pre-owned devices are thoroughly tested for functionality and cosmetic appearance before resale.

Companies’ deeper engagement with innovation appears to have created organizational momentum that carried over into other areas of product development. For example, in our study, we found that the passage of California’s law directly resulted in a flurry of patents related to semiconductor materials, data storage and battery technology, among others. These scientific advances have made devices more durable, repairable and recyclable.

For the average consumer, the recycling laws and the resulting industry responses mean used electronics are available with similar reliability, warranties and return policies as new devices – and at prices as much as 50% lower.

The Conversation

Suvrat Dhanorkar does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


NZ needs more entrepreneurs. Will its new tertiary strategy reward real risk takers?

Getty Images

The government’s recently released Tertiary Education Strategy 2025–2030 signals a shift towards harnessing the sector to address New Zealand’s long-standing productivity issues. But the strategy and its goals aren’t necessarily aligned.

Universities and polytechnics are now expected to promote innovation, accelerate commercialisation and build significantly stronger entrepreneurial capabilities. Measuring those things will be the challenge.

The strategy identifies “particular gaps in market-driven entrepreneurial skills” and instructs universities to expand entrepreneurial education, especially for graduate researchers.

It also acknowledges that more people will build careers through self-employment, freelancing or portfolio work.

In parallel, a new national intellectual property policy gives academic staff the first right to commercialise government-funded research, signalling a stronger expectation that universities will generate new ventures and technologies.

The country wants more innovators, founders and risk-takers. Yet several elements of the strategy, especially the way performance may be assessed, risk unintentionally discouraging the very entrepreneurial pathways it aims to promote.

The wrong metrics?

Graduate earnings are one example. Early-career income is widely used internationally as a marker of labour-market relevance, and the strategy treats it as a key success measure. But entrepreneurship rarely begins with high or stable income.

Founders typically experience several years of irregular or low earnings before ventures become viable.

When systems use earnings as a key indicator, the fear is universities will shift focus toward producing graduates for established, well-paid sectors, and away from entrepreneurial endeavours.

But earnings are only one of several tensions.

The strategy’s strong emphasis on labour-market alignment, employer co-design and responsiveness to current skill shortages can tilt institutions toward preparing graduates for today’s jobs rather than tomorrow’s industries.

Many entrepreneurial opportunities emerge in sectors too new to appear in occupational forecasts, from synthetic biology and climate technologies to AI and autonomous systems.

Innovation depends on experimentation and exploration, not simply meeting existing demand. Students will become innovators when exposed to new knowledge, not legacy skills.

Establishment versus experiment

The strategy highlights efficiency, tighter accountability, and improved retention and completion rates. These metrics reward predictable, linear progression.

But entrepreneurial careers are often nonlinear: students may take breaks to build prototypes or pursue opportunities, and researchers may divide their time between academic work and emerging ventures.

Under an overly prescriptive framework, such behaviour can appear as inefficiency rather than evidence of ambition.

Even the strategy’s treatment of vocational and foundation learning reinforces traditional employment pathways. It emphasises work-based training and immediate workforce attachment.

Yet for many, including Māori, Pacific and regional communities, micro-enterprise, social entrepreneurship and locally-driven innovation are vital tools for economic resilience. A narrow employment lens risks sidelining these and other forms of entrepreneurial value creation.

Stronger industry involvement presents another challenge. While collaboration with employers is essential, those invited to shape curricula are typically large, established firms.

Their priorities differ from those of emerging industries and new ventures. If incumbent voices dominate programme design, the system may become less open to disruption, experimentation and the needs of smaller firms and emerging industries.

What other countries have learned

Several countries, including the United Kingdom, have already confronted these tensions. The Knowledge Exchange Framework, for example, assesses universities on employment outcomes as well as on commercialisation, licensing, community enterprise and research partnerships.

The UK-based Institute for Fiscal Studies suggests assessing medium-term trajectories rather than early-career earnings by using the highest earnings of graduates three to five years after graduation, taking into account prior attainment, demographic characteristics and subject studied.

This demonstrates the potential complexity of trying to get such measures right.

The New Zealand strategy identifies the metrics, but not their detailed definition. The details will matter if we truly want to encourage more innovation and entrepreneurship – not just more students graduating on time, hoping to find jobs.

The strategy does send a strong and welcome signal that innovation and entrepreneurial capability are essential to the country’s future. Its emphasis on commercialisation, creativity and adaptability is aligned with international evidence on what drives productivity in modern economies.

However, as the next step, we must ensure the strategy’s performance measures align with its ambition.

Entrepreneurship rarely looks like a high salary, a tidy CV, or even timely degree completion. It does look like risk-taking, refining and long-term value creation.

It is messy and takes time. If tertiary institutions are judged primarily on short-term, conventional indicators of success, they may be pushed to prioritise safer pathways at the expense of innovation.

If that happens, the system risks promoting entrepreneurship in theory, while constraining it in practice.

The Conversation

Rod McNaughton does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

From Stuttgart’s first industrial revolution to Dubai’s fifth – the need for research to connect outside the academy

In the late 19th century, Stuttgart was booming. The southern German city was famously the cradle of an emerging automobile sector and had already established itself as an industrial powerhouse and centre for toolmaking, mechanical engineering and textiles. Rail connections in the Baden-Württemberg region accelerated development, transported workers and spread wealth.

One might think, then, that an obvious place for the nascent railways to reach out to would have been the historic university town of Tübingen, about 20 miles from Stuttgart. Not so, Tilman Wörtz of the university’s communications department informed me on a recent visit. In fact, explained Wörtz, an accomplished journalist, the story goes that the academic grandees of the era resisted a connection with the emerging financial and industrial powerhouse, perhaps regarding it as somewhat uncouth and vulgar to be distracted from deep cultural and scientific considerations by engaging with the forces of commerce. So for a long time, the proposed railroad hit the buffers.

Fortunately, thanks to the efforts of university leaders, the institution today strives to connect both with industry and with the wider community. There is now a railway station, and I was thrilled to speak with a number of academics about relaying their research and knowledge to non-academic readers. Indeed, this fascinating read on rapper Haftbefehl, who is the subject of a Netflix documentary gripping Germany, has already come out of the sessions, and do stay tuned in the coming weeks and months for more from the University of Tübingen, which was founded in 1477 and is now the first German member institution of The Conversation.

Fast forward a week, and I found myself in Dubai, in the eye of what some cast as a fourth or even fifth industrial revolution, incorporating AI, nanobiology and bioengineering. The city is pitching itself as being at the heart of, and a driving force in, this new era of change, which sees civic government enabling human and technological collaboration to tackle societal issues and power growth.

For more than a decade, what is now called Prototypes for Humanity has been an exhibition at the heart of this city’s dash for development, powering projects that bring the prospect of solutions to challenges in the environment, energy, health, technology and other spheres.

When I attended Prototypes a year ago, it was still largely a showcase for PhD candidates’ projects from some of the world’s leading universities, many of which are members of The Conversation. In the last 12 months, however, a new element has been developed, under the guidance of Naren Barfield, former Provost and Deputy Vice Chancellor of the UK’s Royal College of Art. This sees senior academics come to the city to deliver papers drawing on key aspects of their research.

Full transparency: I served on the selection panel Professor Barfield designed to finalise the programme, and The Conversation was a media partner for the 2025 Prototypes event.

The themes for the year were as follows:

  • Wellbeing and health futures
  • Sustainable and resilient infrastructure
  • Artificial intelligence and augmented intelligence
  • Environmental sustainability and climate action
  • Socio-economic empowerment and innovation
  • Open and speculative categories

Following short paper presentations in the Socio-Economic Empowerment category, Barfield explained the thinking behind the new element of Prototypes and the opportunity for researchers:

We are bringing together some of the world’s sharpest minds and most innovative researchers to tackle challenges faced in different parts of the planet. Dubai and this initiative provide a unique chance to generate ideas across a range of academic disciplines that might not otherwise collaborate in such an impactful way.

The Prototypes for Humanity initiative and the relatively new Professors’ Programme has a proven track record of connecting academia with policymakers, industry, and the public in a way often described elsewhere as aspirational. Here, it is actually happening.

The reference to industry struck a chord, perhaps because I’d so recently heard that story of detachment from 19th-century Stuttgart, but also because it’s a grumble I regularly encounter around the world when it comes to academia and its engagement (or lack thereof) beyond the university sector today.

At the conference venue, in the Emirates Towers of Dubai International Financial District, Tadeu Baldani Caravieri, director of Prototypes for Humanity, discussed the thinking behind the project and potential routes forward.

At Prototypes we’ve seen how researchers can directly drive innovation in partnership with industry and, in the case of Dubai, with the city government as a facilitator.

This has been possible thanks to some of the advantages of this state and region. But these are solutions that can, and do, present wider benefits – in some cases globally relevant solutions.

He later added:

This edition [of Prototypes] helped to confirm fundamental assumptions for the space we operate in, i.e. creating bridges between academic ingenuity and real-world needs. The main one is that, although there is sometimes a disconnect between university innovation capabilities and industry needs, there is genuine interest, across all of the parts in this equation, to overcome obstacles and do more. We have enabled and witnessed very promising and results-oriented conversations between academia and potential partners, from PhDs and private sector discussing pilots in applied robotics, to professors supporting a humanitarian agency to rethink aid allocation systems, to multinationals looking to fuel their R&D roadmaps.

Dubai is an excellent incubator for these bridges we are building but, in keeping with the city’s outlook and spirit, we want to enable impact across the world – so it’s just natural that, in the future, we hope to open structured avenues for multi-city collaborations, where local ecosystems complement each other’s strengths.

Prototypes’ community brings in research talent from more than 800 universities around the world, including many academics who have also engaged with The Conversation. For instance, Jeremy Howick, of the University of Leicester, presented on empathy in healthcare in the age of AI, and has written this account. Further articles based on projects that exhibited and on the professors’ papers will be published on The Conversation and will be accessible via this link.

Stay tuned to read more on critical and diverse research relating to subjects such as monitoring and diagnosing pre-eclampsia (Patricia Maguire, University College Dublin), using seaweed to create a sustainable packaging alternative (Austeja Platukyte, Vilnius Academy of Arts) and the emergent Internet of Beings (Francesco Grillo, Bocconi University, Milan).

The Conversation


Colleges teach the most valuable career skills when they don’t stick narrowly to preprofessional education

Tracking graduates’ earnings is just one way to measure the benefit of higher education. iStock/Getty Images Plus

Across state legislatures and in Congress, debates are intensifying about the value of funding certain college degree programs – and higher education, more broadly.

The growing popularity of professional graduate degrees over the past several decades – including programs in business administration and engineering management – has reshaped the economics of higher education. Unlike traditional academic graduate programs, which are often centered on research and scholarship, these professionally oriented degrees are designed primarily for workforce advancement and typically charge much higher tuition.

These programs are often expensive for students and are sometimes described as cash-cow degrees for colleges and universities, because the tuition revenue far exceeds the instructional costs.

Some universities and colleges also leverage their brands to offer online, executive or certificate-based versions of these programs, attracting many students from the U.S. and abroad who pay the full tuition. This steady revenue helps universities subsidize tuition for other students who cannot pay the full rate, among other things.

Yet a quiet tension underlies this evolution in higher education – the widening divide between practical, technical training and a comprehensive education that perhaps is more likely to encourage students to inquire, reflect and innovate as they learn.

An overlooked factor

Some states, including Texas, track salary data for graduates of every program to measure worth through short-term earnings. This approach may strike many students and their families as useful, but I believe it overlooks a part of what makes higher education valuable.

A healthy higher education system depends not only on producing employable graduates but also on cultivating citizens and leaders who can interpret uncertainty, question assumptions and connect ideas across disciplines.

When assessing disciplines such as English, philosophy, history and world languages, I think that we should acknowledge their contributions to critical thought, communication and ethical reasoning.

These academic disciplines encourage students to synthesize ideas, construct arguments and engage in meaningful debate. Law schools often draw their strongest students from these backgrounds because they nurture the analytical and rhetorical skills essential for navigating complex civic and legal issues.

Historically, poets and writers have often been among the first to be silenced by authoritarian regimes. It’s a reminder of the societal power of inquiry and expression that I believe higher education should protect.

Undergraduate students who want to become doctors or work in other specialized fields are often encouraged to take only classes that connect with their long-term career trajectory. Glenn Beil/Florida A&M University via Getty Images

Why students stay on narrow professional paths

Students entering college today face significant pressure to choose what they might see as safe majors that will result in a well-paying career. For aspiring physicians and engineers, the path is often scripted early, steering them toward the physical and biological sciences. High test scores, internships and other stepping stones are treated as nonnegotiable. Parents and peers can reinforce this mindset.

Most colleges and universities do not reward a future medical student who wants to major in comparative literature, or an engineering student who is spending time on philosophy.

Students’ majors also typically place course requirements on them, in addition to a school’s general course requirements. This often does not leave a lot of room for students to experiment with different classes, especially if they are pursuing vocationally focused majors, such as engineering.

As a result, I’ve seen many students trade curiosity for credentialing, believing that professional identity must come before intellectual exploration.

As someone who began my education in psychology and later transitioned into engineering, I have seen how different intellectual traditions approach the same human questions. Psychology teaches people to observe behavior and design experiments. Engineering trains students to model systems and optimize performance.

When combined, they help reveal how humans interact with technology and how technological solutions reshape human behavior.

In my view, these are questions neither field can answer alone.

Initiative is the missing ingredient

One of the most important and often overlooked ingredients in thriving high tech, medical and business environments is initiative. I believe students in the humanities routinely practice taking initiative by framing questions, interpreting incomplete information and proposing original arguments. These skills are crucial for scientific or business innovation, but they are often not emphasized in structured science, technology, engineering and mathematics – or STEM – coursework.

Initiative involves the willingness to move first and to see around corners, defining the next what-if, rallying others and building something meaningful even when the path is uncertain.

To help my engineering students practice taking initiative, I often give them deliberately vague instructions – something they rarely experience in their coursework. Many students, even highly capable ones, hesitate to take initiative because their schooling experience has largely rewarded caution and compliance over exploration. They wait for clarity or for permission – not because they lack ability, but because they are afraid to be wrong.

Yet in business, research labs, design studios, hospitals and engineering firms, initiative is the quality employers most urgently need and cannot easily teach. Broader educational approaches help cultivate this confidence by encouraging students to interpret ambiguity rather than avoid it.

How teaching can evolve

Helping all students develop a sense of initiative and innovation requires university leaders to rethink what success looks like.

Universities can begin with achievable steps, such as rewarding cross-disciplinary teaching and joint appointments in promotion and tenure criteria.

At the University of Iowa’s Driving Safety Research Institute, where our teams blend engineering, medicine, public health and psychology, students quickly learn that a safe automated vehicle is not just a technical system but also a behavioral one. Understanding how human drivers respond to automation is as important as the algorithms that govern the vehicle.

Other institutions are modeling this approach of integrating social, behavioral and physical sciences.

Olin College of Engineering, a school in Needham, Massachusetts, builds every project around both technical feasibility and human context. Courses are often co-taught by humanities and engineering professors, and projects require students to articulate not only what they built but why it matters.

Still, integrating liberal and technical education is difficult in practice. Professional curricula often overflow with accreditation requirements. Faculty incentives reward specialization more than collaboration. Students and parents, anxious about debt and job security, hesitate to spend credits outside of a student’s major.

Rethinking what success means

I believe that higher education’s purpose is not to produce uniform workers but adaptable thinkers.

It might not be productive to center conversations on defending the liberal arts or glorifying STEM. Rather, I think that people’s focus should be on recognizing that each field is incomplete without the other.

Education for a complex world must cultivate depth, initiative and perspective. When students connect disciplines, question assumptions and act with purpose, they are prepared not only for their first job but for a lifetime of learning and leadership.

The Conversation

Daniel V. McGehee does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


Australian businesses have actually been slow to adopt AI, survey finds

Burst/Unsplash

Over the past few years, we have repeatedly been told artificial intelligence (AI) is coming for our jobs.

In May last year, the International Monetary Fund warned the technology was hitting labour markets like a “tsunami”. Many of the tech giants behind the technology have been making their own confident predictions about which jobs will be wiped out first.

Yet inside many Australian firms, the reality is much less dramatic.

Last week, the Reserve Bank of Australia released the findings from a 2025 survey of 100 medium and large-sized firms, which sought to understand how technology investments have been affecting the way businesses run. It found enterprise-wide AI transformation was the exception rather than the norm.

This presents a strange mismatch: a loud global story about an AI “jobpocalypse”, and a much quieter story inside firms about experiments, pilots and a lot of waiting around for real productivity gains to show up. The Reserve Bank’s report helps explain why.

Laying the groundwork

Australian firms haven’t been ignoring AI and technology. The report notes that over the past decade, total IT investment has grown by almost 80%, faster than other types of capital investment.

Much of that has gone into software, cybersecurity, cloud and upgrading internal systems such as customer relationship management and enterprise resource planning platforms.

While these investments are mostly about keeping the lights on and do not deliver big productivity payoffs on their own, they are essential groundwork to make firms’ systems and processes ready for AI.

‘Minimal’ use of AI

AI is now starting to move up the priority list. About two-thirds of firms surveyed said they have adopted AI “in some form”.

But for the largest group – representing nearly 40% of all respondents – this use was still “minimal”.

Many firms reported only ‘minimal’ use of AI tools. Viralyft/Pexels

The most common use cases were tasks such as summarising emails or drafting text using off-the-shelf products like Microsoft Copilot or ChatGPT.

Just over 20% of all firms reported “moderate” adoption, using AI to assist with tasks such as demand forecasting or inventory management.

And a small frontier group – less than 10% of all firms – said they had embedded AI into more advanced processes such as fraud detection.

Impact on jobs

Asked about jobs and the future, companies said AI and automation would likely save some labour, particularly on traditionally time-consuming tasks. But they didn’t expect it to cause catastrophic job losses.

Around half said they expected AI and machine learning would lead to a modest reduction in headcount over the next three years.

Firms planning to reduce their headcount expected to do so through natural attrition (employees resigning or retiring), lower intake of new staff, and redundancies.

Routine finance, administration and contact centre work were seen as most at risk of being automated. But firms also said AI would likely create demand for new roles related to the technology – such as in cybersecurity and redesigning processes.

Why are Australian firms so slow?

The Reserve Bank’s findings align with other reports showing Australia as a cautious adopter of AI when compared, for example, to the United States.

Global report cards on AI adoption and innovation more broadly consistently place Australia behind many other advanced economies.

A few key themes stand out from the Reserve Bank’s survey and other international evidence.

First, much of Australia’s recent technology investment has gone into cybersecurity, compliance, legacy system upgrades, data quality improvements and cloud migration. This is a necessary first step before AI investments.

Second, many firms have been struggling to hire the skills needed to drive AI transformation, such as data engineers and data scientists.

And third, Australia’s business culture is cautious, with low trust and high levels of concern about AI. Adding to this, individuals’ “shadow” use of AI tools (without telling their bosses) can mask the true extent of the technology’s adoption.

Do Australian firms need more support?

If Australian businesses want to move beyond “ChatGPT for emails”, evidence points to several practical steps they and governments can take.

Better engagement of company boards is crucial for moving beyond shallow pilots of digital technology, especially when workers distrust AI and directors see it as a risk. Providing evidence of more successful use cases is an important part of this shift.

Australia also needs to invest in lifting AI skills across the workforce, with a particular focus on reskilling workers and preparing for the likely decline in entry-level jobs for young people.

The report indicates firms see the uncertain regulatory environment as a major barrier. The goal should be to have clear, risk-based rules that make safe experimentation easier.

Without these, firms are effectively told two things at once: “move fast to stay competitive” and “don’t you dare breach privacy, copyright or ethics”.




Brace for a bumpy ride

It appears we are not yet racing toward a world without workers, but rather heading into a messy and uneven transition.

The number of AI firms in Australia has grown substantially, and there is clear momentum and optimism.

For now, however, many businesses are still working out how to make AI useful, embed it into workflows and manage its risks. Their biggest complaints are about ambiguous regulation and skills shortages – not having too many humans.

The Conversation

Stan Karanasios does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


Trump’s tariffs threaten the future of innovation – and UK tech could be collateral damage

Gorodenkoff/Shutterstock

US president Donald Trump’s 15% baseline tariffs on EU imports may read like a throwback to old-school protectionism, designed to safeguard American jobs and manufacturing. But in today’s globalised and digitally driven economy, the risk isn’t just to steel or car factories; it’s to innovation itself.

The world’s most advanced technologies rely on complex, deeply integrated supply chains. Evidence from 2023 shows that even temporary US tariff shocks disrupted relationships between firms. And these tariffs won’t just hit the EU. They will disrupt the high-value tech ecosystems of partners like the UK – especially firms contributing to artificial intelligence (AI), semiconductor design and cybersecurity.

These industries underpin national resilience, data security and the competitiveness of advanced economies. For the UK, which often positions itself as a global innovation hub post-Brexit, the fallout could be significant.

Take ARM Holdings, the Cambridge-based semiconductor giant whose chip designs power 99% of the world’s smartphones and an increasing share of AI infrastructure.

ARM doesn’t manufacture chips itself. Instead, it licenses its architecture to firms like Apple, Nvidia and Qualcomm. That makes it a prime example of the UK’s value in the global innovation chain: high intellectual property (IP), low carbon footprint, huge reach.

ARM’s position as a vital link in the supply chain underlines another point. Trade policy aimed at traditional manufacturing sectors can inadvertently destabilise tech-intensive, IP-led sectors like semiconductors and software. This is echoed in research examining global tariff spillovers on tech competitiveness.

If tariffs are applied to components or design work linked to traded goods that cross EU or UK borders en route to US manufacturers, that introduces a layer of risk and cost for innovative firms and their global partners.

Even if a company’s work isn’t directly taxed, the uncertainty and red tape may make US firms think twice about sourcing from outside US jurisdictions. While Trump might present that as a victory for American manufacturing, in reality it could raise costs for US producers, damage innovation and make US firms less competitive in the industries he aims to protect.

It’s not just the giants at risk. In the UK, Cambridge’s wider tech cluster, sometimes called “Silicon Fen”, is home to dozens of ambitious AI firms. With operations spanning the UK, EU and US, companies like this depend on fast, flexible and trusted international partnerships to develop, deploy and refine their products. Tariff-related disruptions make collaboration harder at a time when speed is a competitive advantage.

This is not hypothetical. Tariffs reduce access to large markets – and when markets shrink, firms reduce investment in research and innovation.

What Trump gets wrong

Trump’s broader narrative suggests tariffs can bring back jobs and restore industrial power to the US. But innovation doesn’t work like that. A semiconductor isn’t made in one place. A cybersecurity system isn’t built by a single team. These are networked, iterative processes, involving researchers, suppliers, data centres and talent pools across continents. Disrupt that flow and you slow progress.

The UK is especially exposed because of its unique post-Brexit positioning. It trades independently from the EU but is still tightly intertwined with it, particularly in tech sectors.

Many UK firms use EU distribution centres to reach the US market or collaborate with EU partners on joint projects involving data, hardware or software. This reflects the fact that the UK remains tightly integrated into European supply and value chains – exporting £358 billion of goods and services to the EU in 2024 alone. Tariffs targeting the EU could easily catch UK-originated components or design work as collateral damage.

Modelling has shown that Trump’s proposed tariffs could reduce EU-US trade volumes across multiple sectors, particularly in tech, where integrated production routes are standard.

Small and medium-sized enterprises and startups may find themselves most vulnerable. These firms typically can’t absorb sudden cost increases or legal complexities. Nor can they easily switch suppliers or reroute through different customs zones.

If you’re an early-stage AI company relying on a specific chip from Germany and a US cloud partner to train your model, a 15% tariff adds months of delays and thousands of pounds in costs, just to maintain the status quo.
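As a purely illustrative sketch of that squeeze – every figure below is a hypothetical assumption, not a number from this article – the added cost compounds with each shipment:

```python
# Hypothetical example only: the hardware spend, shipment count and handling fee
# are invented for illustration, not reported figures.
tariff_rate = 0.15
chip_spend_per_shipment = 40_000     # assumed spend on specialised chips from Germany (GBP)
shipments_per_year = 4               # assumed
customs_handling_per_shipment = 500  # assumed broker/compliance fee (GBP)

extra_cost_per_year = shipments_per_year * (
    chip_spend_per_shipment * tariff_rate + customs_handling_per_shipment
)
print(extra_cost_per_year)  # 26000.0: added annual cost, before any delays are priced in
```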

From a policy perspective, the impact goes deeper. The UK government has championed sectors like AI, fintech and clean tech as pillars of economic growth. But these industries are only as strong as the networks that sustain them. If global fragmentation accelerates, the UK risks losing its role as a bridge between the US and the EU.

Meanwhile, countries like China continue to invest heavily in consolidating their innovation supply chains, from chip manufacturing to AI research, particularly in efforts to secure domestic control over advanced technologies and semiconductors. This is something that the US and EU have only recently begun to coordinate on.

In the short term, Trump’s tariff strategy may boost US customs revenue, which is up US$50 billion (£38 billion) a month by some estimates.

But this is not “free money”. These revenues are largely absorbed by businesses and ultimately passed on to consumers through higher prices, or to smaller suppliers through squeezed profit margins.

More fundamentally, it represents a belief that economic strength comes from protection rather than connection. But innovation has never worked that way. It thrives on collaboration, trust and scale. Tariffs may be politically effective, but economically they are the equivalent of building firewalls between teams that are supposed to be co-writing the future.

As the UK charts its post-Brexit global role, aligning itself with open, innovation-driven economies should be a priority. That means standing up for the integrity of global tech supply chains and recognising that disruption to one part of the system can reverberate far beyond its intended target.

The Conversation

Matthew Allen is affiliated with The Conservative Party as a party member. He is not a councillor or an MP, and he is not active in any campaigning.


New technologies like AI come with big claims – borrowing the scientific concept of validity can help cut through the hype

Closely examining the claims companies make about a product can help you separate hype from reality. Flavio Coelho/Moment via Getty Images

Technological innovations can seem relentless. In computing, some have proclaimed that “a year in machine learning is a century in any other field.” But how do you know whether those advancements are hype or reality?

Failures quickly multiply when there’s a deluge of new technology, especially when these developments haven’t been properly tested or fully understood. Even technological innovations from trusted labs and organizations sometimes result in spectacular failures. Think of IBM Watson, an AI program the company hailed as a revolutionary tool for cancer treatment in 2011. Rather than evaluating the tool on patient outcomes, however, IBM used less relevant – possibly even irrelevant – measures, such as expert ratings. As a result, IBM Watson not only failed to offer doctors reliable and innovative treatment recommendations, it also suggested harmful ones.

When ChatGPT was released in November 2022, interest in AI expanded rapidly across industry and in science alongside ballooning claims of its efficacy. But as the vast majority of companies are seeing their attempts at incorporating generative AI fail, questions about whether the technology does what developers promised are coming to the fore.

IBM Watson wowed on Jeopardy, but not in the clinic. AP Photo/Seth Wenig

In a world of rapid technological change, a pressing question arises: How can people determine whether a new technological marvel genuinely works and is safe to use?

Borrowing from the language of science, this question is really about validity – that is, the soundness, trustworthiness and dependability of a claim. Validity is the ultimate verdict of whether a scientific claim accurately reflects reality. Think of it as quality control for science: It helps researchers know whether a medication really cures a disease, a health-tracking app truly improves fitness, or a model of a black hole genuinely describes how it behaves in space.

How to evaluate validity for new technologies and innovations has been unclear, in part because science has mostly focused on validating claims about the natural world.

In our work as researchers who study how to evaluate science across disciplines, we developed a framework to assess the validity of any design, be it a new technology or policy. We believe setting clear and consistent standards for validity and learning how to assess it can empower people to make informed decisions about technology – and determine whether a new technology will truly deliver on its promise.

Validity is the bedrock of knowledge

Historically, validity was primarily concerned with ensuring the precision of scientific measurements, such as whether a thermometer correctly measures temperature or a psychological test accurately assesses anxiety. Over time, it became clear that there is more than just one kind of validity.

Different scientific fields have their own ways of evaluating validity. Engineers test new designs against safety and performance standards. Medical researchers use controlled experiments to verify treatments are more effective than existing options.

Researchers across fields use different types of validity, depending on the kind of claim they’re making.

Internal validity asks whether the relationship between two variables is truly causal. A medical researcher, for instance, might run a randomized controlled trial to be sure that a new drug led patients to recover rather than some other factor such as the placebo effect.

External validity is about generalization – whether those results would still hold outside the lab or in a broader or different population. An example of low external validity is that early studies that work in mice don’t always translate to people.

Construct validity, on the other hand, is about meaning. Psychologists and social scientists rely on it when they ask whether a test or survey really captures the idea it’s supposed to measure. Does a grit scale actually reflect perseverance or just stubbornness?

Finally, ecological validity asks whether something works in the real world rather than just under ideal lab conditions. A behavioral model or AI system might perform brilliantly in simulation but fail once human behavior, noisy data or institutional complexity enter the picture.

Across all these types of validity, the goal is the same: ensuring that scientific tools – from lab experiments to algorithms – connect faithfully to the reality they aim to explain.

Evaluating technology claims

We developed a method to help researchers across disciplines clearly test the reliability and effectiveness of their inventions and theories. The design science validity framework identifies three critical kinds of claims researchers usually make about the utility of a technology, innovation, theory, model or method.

First, a criterion claim asserts that a discovery delivers beneficial outcomes, typically by outperforming current standards. These claims justify the technology’s utility by showing clear advantages over existing alternatives.

For example, developers of generative AI models such as ChatGPT may see higher engagement the more the technology flatters and agrees with the user. As a result, they may program the technology to be more affirming – a feature called sycophancy – in order to increase user retention. The AI models then meet a criterion claim: users find them more affirming than talking to people. However, this does little to improve the technology’s efficacy in tasks such as helping resolve mental health issues or relationship problems.

AI sycophancy can lead users to break relationships rather than repair them.

Second, a causal claim addresses how specific components or features of a technology directly contribute to its success or failure. In other words, it is a claim that shows researchers know what makes a technology effective and exactly why it works.

Looking at AI models and excessive flattery, researchers found that interacting with more sycophantic models reduced users’ willingness to repair interpersonal conflict and increased their conviction of being in the right. The causal claim here is that the AI feature of sycophancy reduces a user’s desire to repair conflict.

Third, a context claim specifies where and under what conditions a technology is expected to function effectively. These claims explore whether the benefits of a technology or system generalize beyond the lab and can reach other populations and settings.

In the same study, researchers examined how excessive flattery affected user actions in other datasets, including the “Am I the Asshole” community on Reddit. They found that AI models were more affirming of user decisions than people were, even when the user was describing manipulative or harmful behavior. This supports the context claim that sycophantic behavior from an AI model applies across different conversational contexts and populations.
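
A context claim can be probed in a similar spirit: ask whether the pattern holds across several settings rather than in a single dataset. The toy sketch below is not the study’s actual analysis – the dataset names and affirmation rates are invented – but it shows the shape of the check.

```python
# Hypothetical sketch of checking a context claim: does the effect seen in
# one setting also appear in other datasets and populations?
# Dataset names and rates are invented for illustration.
affirmation_rates = {
    "lab_conversations":   {"ai_model": 0.86, "human_advisers": 0.52},
    "online_forum_posts":  {"ai_model": 0.81, "human_advisers": 0.48},
    "relationship_advice": {"ai_model": 0.88, "human_advisers": 0.55},
}

def context_claim_holds(results, treatment="ai_model", control="human_advisers"):
    """Support the context claim only if the gap appears in every setting
    tested, not just the one the study started with."""
    return all(rates[treatment] > rates[control] for rates in results.values())

for name, rates in affirmation_rates.items():
    print(f"{name}: AI {rates['ai_model']:.2f} vs humans {rates['human_advisers']:.2f}")

print("Context claim supported across settings:", context_claim_holds(affirmation_rates))
```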

Measuring validity as a consumer

Understanding the validity of scientific innovations and consumer technologies is critical for scientists and the general public. For scientists, it’s a road map to ensure their inventions are rigorously evaluated. And for the public, it means knowing that the tools and systems they depend on – such as health apps, medications and financial platforms – are truly safe, effective and beneficial.

Here’s how you can use validity to understand the scientific and technological innovations happening around you.

Because it is difficult to compare every feature of two technologies head to head, focus on the features you value most in a technology or model. Do you care more that a chatbot is accurate, for example, or that it protects your privacy? Then examine the claims made in that area and check whether the product lives up to them.

Consider not only the types of claims made for a technology but also which claims are not made. Does a chatbot company address bias in its model, for example? What a company leaves unsaid is often your best clue as to whether you are looking at untested, potentially unsafe hype or a genuine advance.

By understanding validity, organizations and consumers can cut through the hype and get to the truth behind the latest technologies.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

file-20251014-56-wis48a.jpg

How this year’s Nobel winners changed the thinking on economic growth

The prizewinners were announced at a ceremony in Stockholm. EPA/Anders Wiklund

What makes some countries rich and others poor? Is there any action a country can take to improve living standards for its citizens? Economists have wondered about this for centuries. If the answer to the second question is yes, then the impact on people’s lives could be staggering.

This year’s Sveriges Riksbank Prize in Economic Sciences (commonly known as the Nobel prize for economics) has gone to three researchers who have provided answers to these questions: Philippe Aghion, Peter Howitt and Joel Mokyr.

For most of human history, economic stagnation has been the norm – modern economic growth is very recent from a historical point of view. This year’s winners have been honoured for their contributions towards explaining how to achieve sustained economic growth.

At the beginning of the 1980s, theories of economic growth were largely dominated by the work of the American economist Robert Solow. An important conclusion emerged: in the long run, per-capita income growth is determined by technological progress.
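
As a rough sketch of that conclusion – a textbook rendering of Solow’s model, not notation taken from the prize documents – write output as

$$Y = K^{\alpha}(AL)^{1-\alpha}, \qquad 0 < \alpha < 1,$$

where $K$ is capital, $L$ is labour and $A$ is the level of technology, growing at rate $g$. Along the steady-state path, output per worker $y = Y/L$ grows at the same rate as technology:

$$\frac{\dot{y}}{y} = \frac{\dot{A}}{A} = g.$$

A higher saving rate raises the level of income per person, but the long-run growth rate depends only on $g$.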

Solow’s framework, however, did not explain how technology accumulates over time, nor the role of institutions and policies in boosting it. As such, the theory can explain neither why countries grow at different rates for sustained periods nor what kinds of policies could help a country improve its long-run growth performance.

It’s possible to argue that technological innovation comes from the work of scientists, who are motivated less by money than the rest of society might be. As such, there would be little that countries could do to intervene – technological innovations would be the result of the scientists’ own interests and motivations.

But that thinking changed with the emergence of endogenous growth theory, which aims to explain which forces drive innovation. This includes the works of Paul Romer, Nobel prizewinner in 2018, as well as this year’s winners Aghion and Howitt.

These three authors advocate for theories in which technological progress ultimately derives from firms trying to create new products (Romer) or improve the quality of existing products (Aghion and Howitt). For firms to try to break new ground, they need to have the right incentives.

Creative destruction

While Romer recognises the importance of intellectual property rights to reward firms financially for creating new products, the framework of Aghion and Howitt outlines the importance of something known as “creative destruction”.

This is where innovation results from a battle between firms trying to get the best-quality products to meet consumer needs. In their framework, a new innovation means the displacement of an existing one.
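
In stylised form – a textbook rendering of the basic Aghion–Howitt quality-ladder model, not the notation used in the prize documents – each successful innovation multiplies productivity by a factor $\gamma > 1$, and innovations arrive at a rate proportional to the amount of research $n$, with research productivity $\lambda$. The economy’s average long-run growth rate is then

$$g = \lambda n \ln \gamma,$$

so growth is faster when more resources go into research, when research succeeds more often, and when each step up the quality ladder is bigger – but every step also wipes out the profits of the previous incumbent, which is where creative destruction gets its bite.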

In their basic model, protecting intellectual property is important in order to reward firms for innovating. But at the same time, innovations do not come from leaders but from new entrants to the industry. Incumbents do not have the same incentive to innovate because it will not improve their position in the sector. Consequently, too much protection generates barriers to entry and may slow growth.

But what is less explored in their work is the idea that each innovation brings winners (consumers and innovative firms) and losers (firms and workers under the old, displaced technology). These tensions could shape a country’s destiny in terms of growth – as other works have pointed out, the owners of the old technology may try to block innovation.

This is where Mokyr complements these works perfectly by providing a historical context. Mokyr’s work focuses on the origins of the Industrial Revolution and also the history of technological progress from ancient times until today.

Mokyr noted that while scientific discoveries underpinned technological progress, a discovery on its own was no guarantee of technological advance.

It was only when the modern world started to apply the knowledge discovered by scientists to problems that would improve people’s lives that humans saw sustained growth. In Mokyr’s book The Gifts of Athena, he argues that the Enlightenment was behind the change in scientists’ motivations.

The 2025 winners Joel Mokyr, Philippe Aghion and Peter Howitt.
Ill. Niklas Elmehed © Nobel Prize Outreach

In Mokyr’s view, for growth to be sustained it is vital that knowledge flows and accumulates. This was the spirit embedded in the Industrial Revolution, and it is what fostered the creation of the institution where I work – the University of Sheffield, which received financial support from the steel industry in the 19th century.

Mokyr’s later works emphasise the key role of a culture of knowledge in order for growth to improve living standards. As such, openness to new ideas becomes crucial.

Similarly, Aghion and Howitt’s framework has become a standard tool in economics. It has been used to explore many important questions for human wellbeing: the relationship between competition and innovation, unemployment and growth, growth and income inequality, and globalisation, among many other topics.

Analysis using their framework still has an impact on our lives today. It is present in policy debates around big data, artificial intelligence and green innovation. And Mokyr’s analysis of how knowledge accumulates poses a central question around what countries can do to encourage an innovation ecosystem and improve the lives of their citizens.

But this year’s prize is also a warning about the consequences of damaging the engines of growth. Scientists collaborating with firms to advance living standards is the ultimate elixir for growth. Undermining science, globalisation and competition might not be the right recipe.

The Conversation

Antonio Navas does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.