
The 2025 Nobel economics prize honours economic creation and destruction

Economists Joel Mokyr, Philippe Aghion, and Peter Howitt. Ill. Niklas Elmehed © Nobel Prize Outreach

Three economists working in the area of “innovation-driven economic growth” have won this year’s Nobel Memorial Prize in Economic Sciences.

Half of the 11 million Swedish kronor (about A$1.8 million) prize was awarded to Joel Mokyr, a Dutch-born economic historian at Northwestern University.

The other half was jointly awarded to Philippe Aghion, a French economist at Collège de France and INSEAD, and Peter Howitt, a Canadian economist at Brown University.

Collectively, the trio’s work has examined the importance of innovation in driving sustainable economic growth. It has also highlighted that in dynamic economies, old firms die as new ones are born.

Innovation drives sustainable growth

As noted by the Royal Swedish Academy of Sciences, economic growth has lifted billions of people out of poverty over the past two centuries. While we take this as normal, it is actually very unusual in the broad sweep of history.

The period since around 1800 is the first in human history when there has been sustained economic growth. This warns us we should not be complacent. Poor policy could see economies stagnate again.

One of the Nobel judges gave the example of Sweden and the United Kingdom, where there was little improvement in living standards in the four centuries between 1300 and 1700.

Mokyr’s work showed that prior to the Industrial Revolution, innovations were more a matter of trial and error than being based on scientific understanding. He has argued that sustained economic growth would not emerge in:

a world of engineering without mechanics, iron-making without metallurgy, farming without soil science, mining without geology, water-power without hydraulics, dyemaking without organic chemistry, and medical practice without microbiology and immunology.

Mokyr gives the example of sterilising surgical instruments. This had been advocated in the 1840s or earlier. But surgeons were offended by the suggestion they might be transmitting diseases. It was only after the work of Louis Pasteur and Joseph Lister in the 1860s that the role of germs was understood and sterilisation became common.

Mokyr emphasised the importance of society being open to new ideas. As the Nobel committee put it:

practitioners, ready to engage with science, along with a societal climate embracing change, were, according to Mokyr, key reasons why the Industrial Revolution started in Britain.

Winners and losers

This year’s other two laureates, Aghion and Howitt, recognised that innovations create both winning and losing firms. In the US, about 10% of firms enter and 10% leave the market each year. Promoting economic growth requires an understanding of both processes.

Their 1992 article built on earlier work on the concept of “endogenous growth” – the idea that economic growth is generated by factors inside an economic system, not the result of forces that impinge from outside. This earned a Nobel prize for Paul Romer in 2018.

It also drew on earlier work on “creative destruction” by Joseph Schumpeter.

The model created by Aghion and Howitt implies governments need to be careful how they design subsidies to encourage innovation.

If companies think that any innovation they invest in is just going to be overtaken (meaning they would lose their advantage), they won’t invest as much in innovation.

Their work also supports the idea that governments have a role in supporting and retraining workers who lose their jobs when their firms are displaced by more innovative competitors.

Doing so also helps build political support for policies that encourage economic growth.

‘Dark clouds’ on the horizon?

The three laureates all favour economic growth, in contrast to growing concerns about the impact of endless growth on the planet.

In an interview after the announcement, however, Aghion called for carbon pricing to make economic growth consistent with reducing greenhouse gas emissions.

He also warned about the gathering “dark clouds” of tariffs, cautioning that barriers to trade could reduce economic growth.

And he said we need to ensure today’s innovators do not stifle future innovators through anti-competitive practices.

The newest Nobel prize

The economics prize was not one of the five prizes established in Swedish chemist Alfred Nobel’s will in 1895. It is formally called the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel. It was first awarded in 1969.

The awards to Mokyr and Howitt continue the pattern of the economics prize being dominated by researchers working at US universities.

It also continues the pattern of over-representation of men. Only three of the 99 economics laureates have been women.

Arguably, economics professor Rachel Griffith, rather than Mokyr, could have shared the prize with Aghion and Howitt this year. She co-authored the book Competition and Growth with Aghion, and co-wrote an article on competition with both of them.

The Conversation

John Hawkins does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


Young businesses create 6 in 10 new jobs in Australia – far more than established firms

Chris Putnam/Future Publishing via Getty Images

Governments of all stripes provide support to small businesses in the form of tax concessions, lighter-touch regulation or government grants. They’re called the “engine room” of the economy. But is small really best?

In recent research, my co-authors and I explored this question by looking at the contributions that firms of different ages and size make to the economy.

We found new and young businesses, rather than small, old businesses, are the drivers of economic growth. This matters, as the economic dynamism these young firms drive boosts productivity – the major determinant of incomes in the long run. But government policy is focused on size, which may be holding us back.

Using de-identified data from the Australian Bureau of Statistics that tracks all businesses in Australia, we analysed the economic performance of each individual business in the market sector from 2003 onward – from pubs and cafes to manufacturing.

This includes all business types and sizes, from the corner store to the major corporates. We analysed how many people they employed, their economic value-add (think of it as their contribution to the economy), and their labour productivity (how much they produce for a given number of workers and hours).
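
As a rough illustration of the labour productivity measure just described – value added per hour worked – here is a minimal sketch with invented numbers. It is not the ABS-based methodology used in the study, which is more detailed.

```python
# Minimal sketch of a labour productivity calculation: value added per
# hour worked. The numbers are invented for illustration only and are
# not drawn from the ABS data or the study discussed above.

def labour_productivity(value_added_dollars: float, hours_worked: float) -> float:
    """Return value added produced per hour of labour input."""
    return value_added_dollars / hours_worked

# A hypothetical cafe: $400,000 of annual value added, produced by two
# staff each working 1,900 hours over the year.
print(round(labour_productivity(400_000, 2 * 1_900), 2))  # ~105.26 dollars per hour
```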

Australia has some 2.7 million small businesses, with 440,000 new businesses started in 2024-25. But our study finds it’s young firms (those aged five years or less) that punch above their weight and have an outsized positive contribution to the economy, while small, old firms (aged over five, and with fewer than 15 employees) have a net negative impact.

Engines of job creation

Our research found young businesses contribute six percentage points to overall annual headcount growth. This compares to small, old firms, which actually reduce overall annual headcount growth by 4.5 percentage points, due to these firms stagnating, shrinking and closing down.

This difference is underlined when we look separately at job creation and job destruction. Young firms contribute 59% of new jobs, while small old firms account for just 16%.

This is even more stark when comparing job losses: small old businesses account for 41% of all job destruction. Large old businesses – often the focus of announced corporate layoffs – account for 18% of job destruction.

So is young best then? As economists like to say – it depends.

We analysed the growth trajectories of young firms and found significant differences.

Of firms that survive to age five, high-performing young firms employ twice as many workers as the average firm of the same age, and are over 40% more productive.

But the typical new business (in its first year of activity) is relatively small, employing only around two people. And it stops growing relatively quickly – on average new firms plateau after two years of operation. This highlights the vast differences in firm types among young firms.

This might not be surprising to some readers; not all new businesses are started with the goal of being the next Atlassian or Canva.

People start businesses for a range of reasons: whether you’re a lawyer who’d rather be your own boss than work for a large corporation; an IT worker who recently had a child and values the flexibility and control of managing your own time; or a tradie who benefits from the tax implications of running your own business.

Smarter ways to support all businesses

This highlights the importance of policymakers being clear on what they’re trying to achieve when providing subsidies and support to businesses.

Our analysis suggests if the policy goal is to spur economic growth and employment, then targeting assistance to small businesses is poor policy. But this doesn’t necessarily mean we should take that assistance and give it to young firms instead.

Although a small number of high-performing young firms drive economic growth, we won’t always know in advance which young firms these will be. Policy that subsidises young firms would potentially still be ineffective. And we know government has a chequered history with picking winners – see the more than A$30 billion provided to the car manufacturing sector.

So, what should government do?

One often overlooked and potentially counterintuitive finding from our research is the role of firm “exits” – businesses closing down or moving onto new ventures. Firms that exit are 20% less productive than the average firm in their industry five years before they close down, and their productivity declines further as they approach closure.

But the rate of business closures in Australia has been declining over time. Policies that remove impediments to orderly business closure, including supporting affected workers, would help workers and capital to be re-allocated to more productive and innovative firms.

Targeting assistance to specific businesses is always fraught with difficulty. Policymakers can instead focus on broader policy settings that are conducive to growth, and that apply to all firms rather than just a subset.

These efforts, such as streamlining regulation and ensuring it is fit for purpose for all businesses, would be in line with some of the principles and reform directions agreed at Treasurer Jim Chalmers’ economic reform roundtable earlier this year.

The author thanks Rachel Lee and Ewan Rankin, researchers at the e61 Institute, for their contribution to this article.

The Conversation

Lachlan Vass is affiliated with the e61 Institute.


Who invented the light bulb?

Eureka, what an idea! TU IS/iStock/Getty Images Plus

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to CuriousKidsUS@theconversation.com.


Who invented the light bulb? – Preben, age 5, New York City


When people name the most important inventions in history, light bulbs are usually on the list. They were much safer than earlier light sources, and they made more activities, for both work and play, possible after the Sun went down.

More than a century after its invention, illustrators still use a lit bulb to symbolize a great idea. Credit typically goes to inventor and entrepreneur Thomas Edison, who created the first commercial light and power system in the United States.

But as a historian and author of a book about how electric lighting changed the U.S., I know that the actual story is more complicated and interesting. It shows that complex inventions are not created by a single genius, no matter how talented he or she may be, but by many creative minds and hands working on the same problem.

Thomas Edison didn’t invent the basic design of the incandescent light bulb, but he made it reliable and commercially viable.

Making light – and delivering it

In the 1870s, Edison raced against other inventors to find a way of producing light from electric current. Americans were keen to give up their gas and kerosene lamps for something that promised to be cleaner and safer. Candles offered little light and posed a fire hazard. Some customers in cities had brighter gas lamps, but they were expensive, hard to operate and polluted the air.

When Edison began working on the challenge, he learned from many other inventors’ ideas and failed experiments. They all were trying to figure out how to send a current through a thin carbon thread encased in glass, making it hot enough to glow without burning out.

In England, for example, chemist Joseph Swan patented an incandescent bulb and lit his own house in 1878. Then in 1881, at a great exhibition on electricity in Paris, Edison and several other inventors demonstrated their light bulbs.

Edison’s version proved to be the brightest and longest-lasting. In 1882 he connected it to a full working system that lit up dozens of homes and offices in downtown Manhattan.

But Edison’s bulb was just one piece of a much more complicated system that included an efficient dynamo – the powerful machine that generated electricity – plus a network of underground wires and new types of lamps. Edison also created the meter, a device that measured how much electricity each household used, so that he could tell how much to charge his customers.

Edison’s invention wasn’t just a science experiment – it was a commercial product that many people proved eager to buy.

Inventing an invention factory

As I show in my book, Edison did not solve these many technical challenges on his own.

At his farmhouse laboratory in Menlo Park, New Jersey, Edison hired a team of skilled technicians and trained scientists, and he filled his lab with every possible tool and material. He liked to boast that he had only a fourth grade education, but he knew enough to recruit men who had the skills he lacked. Edison also convinced banker J.P. Morgan and other investors to provide financial backing to pay for his experiments and bring them to market.

Historians often say that Edison’s greatest invention was this collaborative workshop, which he called an “invention factory.” It was capable of launching amazing new machines on a regular basis. Edison set the agenda for its work – a role that earned him the nickname “the wizard of Menlo Park.”

Here was the beginning of what we now call “research and development” – the network of universities and laboratories that produce technological breakthroughs today, ranging from lifesaving vaccines to the internet, as well as many improvements in the electric lights we use now.

Sparking an electric revolution

Many people found creative ways to use Edison’s light bulb. Factory owners and office managers installed electric light to extend the workday past sunset. Others used it for fun purposes, such as movie marquees, amusement parks, store windows, Christmas trees and evening baseball games.

Theater directors and photographers adapted the light to their arts. Doctors used small bulbs to peer inside the body during surgery. Architects and city planners, sign-makers and deep-sea explorers adapted the new light for all kinds of specialized uses. Through their actions, humanity’s relationship to day and night was reinvented – often in ways that Edison never could have anticipated.

Today people take for granted that they can have all the light they need at the flick of a switch. But that luxury requires a network of power stations, transmission lines and utility poles, managed by teams of trained engineers and electricians. To deliver it, electric power companies grew into an industry monitored by insurance companies and public utility regulators.

Edison’s first fragile light bulbs were just one early step in the electric revolution that has helped create today’s richly illuminated world.


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

The Conversation

Ernest Freeberg does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


Proposed cuts to NIH funding would have ripple effects on research that could hamper the US for decades

The NIH is a node in an interconnected system producing health and medical advances. Anchalee Phanmaha/Moment via Getty Images

In May 2025, the White House proposed reducing the budget of the National Institutes of Health by roughly 40% – from about US$48 billion to $27 billion. Such a move would return NIH funding to levels last seen in 2007. Since NIH budget records began in 1938, NIH has seen only one previous double-digit cut: a 12% reduction in 1952.

Congress is now tasked with finalizing the budget ahead of the new fiscal year, which begins Oct. 1. In July, the Senate rejected the White House’s proposed cuts and instead advanced a modest increase. And in early September, the House of Representatives also supported a budget that maintains the agency’s current funding levels.

However, talk of cutting NIH funding is not a new development. Such proposals tend to resurface from time to time, and the ongoing discussion has created uncertainty about the stability of research overall and prompted concern among scientists about the future of their work.

As researchers studying complex health policy systems – and specifically, science funding policy – we see the NIH as one node in an interconnected system that supports the discovery of new knowledge, trains the biomedical workforce and makes possible medical and public health advances across the U.S.

Our research shows that while cutting NIH funding may appear to save money in the short term, it can trigger a chain of effects that increase long-term health care costs and slow the development of new treatments and public health solutions over time.

Seeing the bigger picture of NIH funding

NIH funding does not just support the work of individual researchers and laboratories. It shapes the foundation of American science and health care by training scientists, supporting preventive health research and creating the knowledge that biomedical companies can later build into new products.

To understand how funding cuts may affect scientific progress, the training of new researchers and the availability of new treatments, we took a broad look at existing evidence. We reviewed studies and data that connect NIH funding, or biomedical research more generally, to outcomes such as innovation, workforce development and public health.

In a study published in July 2025, we built a simple framework to show how changes in one part of the system – research grants, for example – can lead to changes in others, like fewer training opportunities or slower development of new therapies.

Eroding the basic research foundation

The NIH funds early-stage research that lacks immediate commercial value but provides the building blocks for future innovations. This includes projects that map disease pathways, develop new laboratory methods or collect large datasets that researchers use for decades.

For example, NIH-supported research in the 1950s identified cholesterol and its role in disease pathways for heart disease, helping to lay the groundwork for the later discovery of statins used by millions of people to lower cholesterol levels. Cancer biology research in the 1960s led to the discovery of cisplatin, a chemotherapy prescribed to 10% to 20% of cancer patients. Basic research in the 1980s on how the kidneys handle sugar helped pave the way for a new class of drugs for Type 2 diabetes, some of which are also used for weight management. Diabetes affects about 38 million Americans, and obesity affects more than 40% of the adults in the U.S.

Cisplatin, a chemotherapy widely used today, was developed through NIH-supported cancer biology research.
FatCamera/E+ via Getty Images

Without this kind of public, taxpayer-funded investment, many foundational projects would never begin, because private firms rarely take on work with long timelines or unclear profits. Our study did not estimate dollar amounts, but the evidence we reviewed shows that when public research slows, downstream innovation and economic benefits are also delayed. That can mean fewer new treatments, slower adoption of cost-saving technologies and reduced growth in industries that depend on scientific advances.

Reducing the scientific workforce

By providing grants that support students, postdoctoral researchers and early-career investigators, along with the labs and facilities where they train, the NIH also plays a central role in preparing up-and-coming scientists.

When funding is cut, fewer positions are available and some labs face closure. This can discourage young researchers from entering or staying in the field. The effect extends beyond academic research. Some NIH-trained scientists later move into biotechnology, medical device companies and data science roles. A weaker training system today means fewer skilled professionals across the broader economy tomorrow.

For example, NIH programs have produced not only academic researchers but also engineers and analysts who now work on immune therapies, brain-computer interfaces, diagnostics and AI-driven tools, as well as other technologies in startups and in more established biotech and pharmaceutical companies.

If those training opportunities shrink, biotech and pharmaceutical industries may have less access to talent. A weakened NIH-supported workforce may also risk eroding U.S. global competitiveness, even in the private sector.

Innovation shifts toward narrow markets

Public and private investment serve different purposes. NIH funding often reduces scientific risk by advancing projects to a stage where companies can invest with greater confidence. Past examples include support for imaging physics that led to MRI and PET scans and early materials science research that enabled modern prosthetics.

Our research highlights the fact that when public investment recedes, companies tend to focus on products with clearer near-term returns. That may tilt innovation toward specialty drugs or technologies with high launch prices and away from improvements that serve broader needs, such as more effective use of existing therapies or widely accessible diagnostics.

Imaging technologies such as MRI were developed through NIH funding for basic research.
Tunvarat Pruksachat/Moment via Getty Images

Some cancer drugs, for instance, relied heavily on NIH-supported basic science discoveries in cell biology and clinical trial design. Independent studies have documented that without this early publicly supported work, development timelines lengthen and costs increase, which can translate into higher prices for patients and health systems. When public funding shrinks and companies shift toward expensive products instead of lower-cost improvements, overall health spending can rise.

What looks like a budget saving in the near term can therefore have the opposite effect, with government programs such as Medicare and Medicaid ultimately shouldering higher costs.

Prevention and public health are sidelined

NIH is also a major funder of research aimed at promoting health and preventing disease. This includes studies on nutrition, chronic diseases, maternal health and environmental exposures such as lead or air pollution.

These projects often improve health long before disease becomes severe, but they rarely attract private investment because their benefits unfold gradually and do not translate into direct profits.

Delaying or canceling prevention research can result in higher costs later, as more people require intensive treatment for conditions that could have been avoided or managed earlier. For example, decades of observation in the Framingham Heart Study shaped treatment guidelines for risk factors such as high blood pressure and heart rhythm disorders. This cornerstone of prevention now helps avert heart attacks and strokes, which are far riskier and more costly to treat than to prevent.

A broader shift in direction?

Beyond these specific areas, the larger issue is how the U.S. will choose to support science and medical research going forward. For decades, public investment has enabled researchers to take on difficult questions and conduct decades-long studies. This support has contributed to advances that, unlike drugs or devices, do not fit neatly into market priorities – from psychosocial therapies for depression to surgical methods for liver transplants.

If government support weakens, medical and health research may become more dependent on commercial markets and philanthropic donors. That can narrow the kinds of problems studied and limit flexibility to respond to urgent needs such as emerging infections or climate-related health risks.

Countries that sustain public investment may also gain an edge by attracting top researchers and setting global standards for new technologies.

On the other hand, once opportunities are lost and talent is dispersed, rebuilding takes far more time and resources.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.


Fewer than 1 in 4 Australians work in a gender-balanced occupation. Fixing it is in all our interests

Claudio Schwarz/Unsplash, CC BY

Australia’s workforce is almost evenly split between men and women. Yet fewer than one in four Australians work in a gender-balanced occupation.

This has improved over time, but at a glacial rate. In 1990, more than half of men (52%) worked in occupations that were more than 80% male. Thirty-five years on, that figure has only declined to 41% of men.

Meanwhile, the share of women in female-dominated occupations (which are more than 60% female) has largely hovered between 60% and 65% since the mid-1990s.

This also holds true within industries. Fewer than half of all employees are in gender-balanced industries, and three of the five industries with the largest workforces – health care and social assistance, construction, and education and training – have become even more segregated since 1990.

Between 2006 and 2021, just one in five occupations became less segregated.

Why does it matter? Turns out, it’s bad for workers, businesses and the economy.

The hit to incomes and productivity

As this graph shows, while most Australian industries have improved their gender balance between 1990 and 2025, one-third remain male-dominated.



Productivity and income are lower when men and women are channelled into different jobs.

Income per person is lower when women are under-represented in entrepreneurial positions and in highly skilled occupations.

From scientific teams to company boards, international research shows innovation is higher in gender-diverse teams.

Segregation also contributes to the gender pay gap and inequality.

Nearly one-quarter of the pay gap in Australia is attributable to segregation within occupations and industries. Pay gaps are largest in the most segregated jobs. And there is some international evidence that suggests pay and prestige fall as the share of women in a role rises.

Businesses suffer when they do not draw from the full talent pool when hiring workers. They have fewer applicants per job vacancy, which can lead to lower-quality hires or difficulty filling jobs. This makes the labour market less efficient and unemployment structurally higher.

Individuals pay a price too. People face social stigma and increased risk of harassment when working in non-traditional roles. And their ability to move between jobs as the economy changes and new opportunities emerge is restricted.

Different strategies for different industries

Gender segregation in Australia has proven resistant to the rapid rise in female educational attainment and the near-convergence in labour force participation rates between men and women.

That’s probably because gender segregation is driven by several linked and self-reinforcing factors: education pathways and gender norms, the unequal distribution of unpaid work, workplace cultures, and low pay in feminised industries.




Read more:
New study finds the gender earnings gap could be halved if we reined in the long hours often worked by men


If we are to really shift the dial on this deep-seated problem, a mix of targeted and broader economic policies will be needed, alongside buy-in from businesses, workers, and society at large.

In female-dominated industries and occupations, which tend to be low-paid, the most direct lever available to governments is pay.

Higher wages should attract more workers – male and female. The federal government has committed significant funding to wage increases for aged care and childcare workers. But it should also focus on improving their working conditions.

In male-dominated industries and occupations, improving workplace cultures should be a priority. Here, most levers are in the hands of businesses.

Improving culture requires firms to have appropriate recruitment, conduct, family, and performance-evaluation policies, plus commitments from senior leadership to model and enforce those policies.

Australia is making progress in these areas. But there’s still more work to do on reshaping gender norms.

Shifting gender norms

Gender norms are not static: they change over time and can be influenced by policy settings.

There is strong evidence that more gender-equal uptake of parental leave leads to more gender-equal attitudes among adults and their children, and a more equal distribution of unpaid work over time.

A happy, smiling baby and a dad with a beard kissing her cheek.

Mikael Stenberg/Unsplash, CC BY

Too many men do not take the leave on offer, both from the government and, increasingly, from their employer. Changing that will require changing societal attitudes, to normalise men providing care.

The government should consider extending the use-it-or-lose-it component of paid parental leave from four weeks to six, to encourage more men to take it.

Things can change

The gender imbalance of occupations and industries is slow to change, but it is not immutable.

International variation in which gender dominates a particular job suggests that existing patterns need not be permanent. Indeed, occupations can reach a tipping point, after which improvement in the gender balance rapidly accelerates.

Progress leads to more progress, as people respond to policies and cultures change.

The job aspirations of teenage boys and girls reflect the current gender balance of the labour market. But over time, and with concerted effort, we can shift what future generations think is possible.


Correction: An earlier version of the chart in this article contained erroneous data for 2025. This has been amended.

The Conversation

The Grattan Institute began with contributions to its endowment of $15 million from each of the Federal and Victorian Governments, $4 million from BHP Billiton, and $1 million from NAB. In order to safeguard its independence, Grattan Institute’s board controls this endowment. The funds are invested and contribute to funding Grattan Institute's activities. Grattan Institute also receives funding from corporates, foundations, and individuals to support its general activities as disclosed on its website.


From glass and steel to rare earth metals, new materials have changed society throughout history

Steel played a large role in the Industrial Revolution. Monty Rakusen/DigitalVision via Getty Images

Many modern devices – from cellphones and computers to electric vehicles and wind turbines – rely on strong magnets made from a group of minerals called rare earths. As the systems and infrastructure used in daily life have turned digital and the United States has moved toward renewable energy, accessing these minerals has become critical – and the markets for these elements have grown rapidly.

Modern society now uses rare earth magnets in everything from national defense, where magnet-based systems are integral to missile guidance and aircraft, to the clean energy transition, which depends on wind turbines and electric vehicles.

The rapid growth of the rare earth metal trade and its effects on society isn’t the only case study of its kind. Throughout history, materials have quietly shaped the trajectory of human civilization. They form the tools people use, the buildings they inhabit, the devices that mediate their relationships and the systems that structure economies. Newly discovered materials can set off ripple effects that shape industries, shift geopolitical balances and transform people’s daily habits.

Materials science is the study of the atomic structure, properties, processing and performance of materials. In many ways, materials science is a discipline of immense social consequence.

As a materials scientist, I’m interested in what can happen when new materials become available. Glass, steel and rare earth magnets are all examples of how innovation in materials science has driven technological change and, as a result, shaped global economies, politics and the environment.

How innovation shapes society: Pressures from societal and political interests (orange arrows) drive the creation of new materials and the technologies that such materials enable (center). The ripple effects resulting from people using these technologies change the entire fabric of society (blue arrows).
Peter Mullner

Glass lenses and the scientific revolution

In the early 13th century, after the sacking of Constantinople, some excellent Byzantine glassmakers left their homes to settle in Venice – at the time a powerful economic and political center. The local nobility welcomed the glassmakers’ beautiful wares. However, to prevent the glass furnaces from causing fires, the nobles exiled the glassmakers – under penalty of death – to the island of Murano.

Murano became a center for glass craftsmanship. In the 15th century, the glassmaker Angelo Barovier experimented with adding the ash from burned plants, which contained a chemical substance called potash, to the glass.

The potash reduced the melting temperature and made liquid glass more fluid. It also eliminated bubbles in the glass and improved optical clarity. This transparent glass was later used in magnifying lenses and spectacles.

Johannes Gutenberg’s printing press, completed in 1455, made reading more accessible to people across Europe. With it came a need for reading glasses, which grew popular among scholars, merchants and clergy – enough that spectacle-making became an established profession.

By the early 17th century, glass lenses evolved into compound optical devices. Galileo Galilei pointed a telescope toward celestial bodies, while Antonie van Leeuwenhoek discovered microbial life with a microscope.

The glass lens of the Vera Rubin Observatory, which surveys the night sky.
Large Synoptic Survey Telescope/Vera Rubin Observatory, CC BY

Lens-based instruments have been transformative. Telescopes have redefined long-standing cosmological views. Microscopes have opened entirely new fields in biology and medicine.

These changes marked the dawn of empirical science, where observation and measurement drove the creation of knowledge. Today, the James Webb Space Telescope and the Vera C. Rubin Observatory continue those early telescopes’ legacies of knowledge creation.

Steel and empires

In the late 18th and 19th centuries, the Industrial Revolution created demand for stronger, more reliable materials for machines, railroads, ships and infrastructure. The material that emerged was steel, which is strong, durable and cheap. Steel is a mixture of mostly iron, with small amounts of carbon and other elements added.

Countries with large-scale steel manufacturing once had outsized economic and political power and influence over geopolitical decisions. For example, the British Parliament used the Iron Act of 1750 to prevent the colonies from exporting finished steel: Britain wanted the colonies’ raw iron as supply for its own steel industry.

Benjamin Huntsman invented a smelting process using 3-foot tall ceramic vessels, called crucibles, in 18th-century Sheffield. Huntsman’s crucible process produced higher-quality steel for tools and weapons.

One hundred years later, Henry Bessemer developed a steelmaking process that blew air through molten iron, which drastically increased production speed and lowered costs. In the United States, figures such as Andrew Carnegie created a vast industry based on Bessemer’s process.

The widespread availability of steel transformed how societies built, traveled and defended themselves. Skyscrapers and transit systems made of steel allowed cities to grow, steel-built battleships and tanks empowered militaries, and cars containing steel became staples in consumer life.

White-hot steel pouring out of an electric arc furnace in Brackenridge, Penn.
Alfred T. Palmer/U.S. Library of Congress

Control over steel resources and infrastructure made steel a foundation of national power. China’s 21st-century rise to steel dominance is a continuation of this pattern. From 1995 to 2015, China’s contribution to world steel production increased from about 10% to more than 50%. The White House responded in 2018 with massive tariffs on Chinese steel.

Rare earth metals and global trade

Early in the 21st century, the advance of digital technologies and the transition to an economy based on renewable energies created a demand for rare earth elements.

Offshore turbines use several tons of rare earth magnets to transform wind into electricity.
Hans Hillewaert/Wikimedia Commons, CC BY-SA

Rare earth elements are 17 chemically very similar elements, including neodymium, dysprosium, samarium and others. They occur together in nature and are the ingredients that make magnets super strong and useful. They are necessary for highly efficient electric motors, wind turbines and electronic devices.

Because of their chemical similarity, separating and purifying rare earth elements involves complex and expensive processes.

China controls the majority of global rare earth processing capacity. Political tensions between countries, especially around trade tariffs and strategic competition, can risk shortages or disruptions in the supply chain.

The rare earth metals case illustrates how a single category of materials can shape trade policy, industrial planning and even diplomatic alliances.

Mining rare earth elements has allowed for the widespread adoption of many modern technologies.
Peggy Greb, USDA

Technological transformation begins with societal pressure. New materials create opportunities for scientific and engineering breakthroughs. Once a material proves useful, it quickly becomes woven into the fabric of daily life and broader systems. With each innovation, the material world subtly reorganizes the social world — redefining what is possible, desirable and normal.

Understanding how societies respond to new innovations in materials science can help today’s engineers and scientists solve crises in sustainability and security. Every technical decision is, in some ways, a cultural one, and every material has a story that extends far beyond its molecular structure.

The Conversation

The National Science Foundation, the Department of Energy, NASA, and other national and regional agencies have funded former research of Peter Mullner.


Michelin Guide scrutiny could boost Philly tourism, but will it stifle chefs’ freedom to experiment and innovate?

Chef Phila Lorn prepares a bowl of noodle soup at Mawn restaurant in Philadelphia. AP Photo/Matt Rourke

The Philadelphia restaurant scene is abuzz with the news that the famed Michelin Guide is coming to town.

As a research chef and educator at Drexel University in Philadelphia, I am following the Michelin developments closely.

Having eaten in Michelin restaurants in other cities, I am confident that Philly has at least a few star-worthy restaurants. Our innovative dining scene was named one of the top 10 in the U.S. by Food & Wine in 2025.

Researchers have convincingly shown that Michelin ratings can boost tourism, so Philly gaining some starred restaurants could bring more revenue for the city.

But as the lead author of the textbook “Culinary Improvisation,” which teaches creativity, I also worry the Michelin scrutiny could make chefs more focused on delivering a consistent experience than continuing along the innovative trajectory that attracts Michelin in the first place.

Ingredients for culinary innovation

In “Culinary Improvisation” we discuss three elements needed to foster innovation in the kitchen.

The first is mastery of culinary technique, both classical and modern. Simply stated, this refers to good cooking.

The second is access to a diverse range of ingredients and flavors. The more colors the artist has on their palette, the more directions the creation can take.

And the third, which is key to my concerns, is a collaborative and supportive environment where chefs can take risks and make mistakes. Research shows a close link between risk-taking workplaces and innovation.

According to the Michelin Guide, stars are awarded to outstanding restaurants based on: “quality of ingredients, mastery of cooking techniques and flavors, the personality of the chef as expressed in the cuisine, value for money, and consistency of the dining experience both across the menu and over time.”

The criteria do not mention innovation.

It’s possible the high-stakes lure of a Michelin star, which rewards consistent excellence, could lead Philly’s most vibrant and creative chefs and restaurateurs to pull back on the risks that led to the city’s culinary excellence in the first place.

Local food writers believe Vernick Fish is a top contender for a Michelin star.
Photo courtesy of Vernick Fish

The obvious contenders

Philadelphia’s preeminent restaurant critic Craig LaBan and journalist and former restaurateur Kiki Aranita discussed local contenders for Michelin stars in a recent article in the Philadelphia Inquirer.

The 19 restaurants LaBan and Aranita discuss as possible star contenders average just over a one-mile walk from the Pennsylvania Convention Center.

Together they have received 78 James Beard nominations or awards, which are considered the “Oscars” of the food industry. That’s an average of over four per restaurant.

And when I tried to book a table for two on a Wednesday and Saturday before 9 p.m., about half were already fully booked for dinner two weeks out, in July, which is the slow season for dining in Philadelphia.

If LaBan’s and Aranita’s predictions are right, Michelin will be an added recognition for restaurants that are already successful and centrally located.

Black Dragon Takeout fuses Black American cuisine with the aesthetics of classic Chinese American takeout.
Jeff Fusco/The Conversation, CC BY-SA

Off the beaten path

When the Michelin Guide started in France at the turn of the 20th century, it encouraged diners to take the road less traveled to their next gastronomic experience.

It has since evolved into recommendations for a road well traveled: safe, lauded and already hard-to-get-into restaurants. In Philly these could be restaurants such as Vetri Cucina, Zahav, Vernick Fish, Provenance, Royal Sushi and Izakaya, Ogawa and Friday Saturday Sunday, to name a few on LaBan and Aranita’s list.

And yet Philadelphia has over 6,000 restaurants spread across 135 square miles of the city. Philadelphia is known as a city of neighborhoods, and these neighborhoods are rich with food diversity and innovation.

Consider Jacob Trinh’s Vietnamese-tinged seafood tasting menu at Little Fish in Queen Village; Kurt Evans’ gumbo lo mein at Black Dragon Takeout in West Philly; the beef cheek confit with avocado mousse at Temir Satybaldiev’s Ginger in the Northeast; and the West African XO sauce at Honeysuckle, owned by Omar Tate and Cybille St.Aude-Tate, on North Broad Street.

I hope the Michelin inspectors will venture far beyond the obvious candidates to experience more of what Philadelphia has to offer.

The Michelin Guide announced it will include Philadelphia and Boston in its next Northeast Cities edition.
Matthieu Delaty/Hans Lucas/AFP via Getty Images

Raising the bar

In the frenzy surrounding the Michelin scrutiny, chef friends have invited me to dine at their restaurants and share my feedback as they refine their menus in anticipation of visits from anonymous Michelin inspectors.

Restaurateurs have been asking my colleagues and me for talent suggestions to replace well-liked and capable cooks, servers and managers whom owners perceive to be just not Michelin-star level.

And managers are texting us names of suspected reviewers, triggered by some tell-tale signs – a solo diner with a weeknight tasting menu reservation, no dietary restrictions or special requests, and a conspicuously light internet presence.

In all, I am excited about Philadelphians being excited about Michelin. Any opportunity to spotlight the city’s restaurant community and tighten its food and service quality raises the bar among local chefs and restaurateurs and makes the experience better for diners. And the prospect of business travelers and culinary tourists enjoying lunches and early-week dinners can help restaurants, their workers and the city earn more revenue.

But in the din of the press events and hype, let’s not forget that Philadelphians don’t need an outside arbiter to tell us what we already know: Philly is a great place to eat and drink.

Read more of our stories about Philadelphia.

The Conversation

Jonathan Deutsch does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


How the end of carbon capture could spark a new industrial revolution

Steelmaking uses a lot of energy, making it one of the highest greenhouse gas-emitting industries.
David McNew/Getty Images

The U.S. Department of Energy’s decision to claw back US$3.7 billion in grants from industrial demonstration projects may create an unexpected opening for American manufacturing.

Many of the grant recipients were deploying carbon capture and storage – technologies that are designed to prevent industrial carbon pollution from entering the atmosphere by capturing it and injecting it deep underground. The approach has long been considered critical for reducing the contributions chemicals, cement production and other heavy industries make to climate change.

However, the U.S. policy reversal could paradoxically accelerate emissions cuts from the industrial sector.

An emissions reality check

Heavy industry is widely viewed as the toughest part of the economy to clean up.

The U.S. power sector has made progress, cutting emissions 35% since 2005 as coal-fired power plants were replaced with cheaper natural gas, solar and wind energy. More than 93% of new grid capacity installed in the U.S. in 2025 was forecast to be solar, wind and batteries. In transportation, electric vehicles are the fastest-growing segment of the U.S. automotive market and will lead to meaningful reductions in pollution.

But U.S. industrial emissions have been mostly unchanged, in part because of the massive amount of coal, gas and oil required to make steel, concrete, aluminum, glass and chemicals. Together these materials account for about 22% of U.S. greenhouse gas emissions.

The global industrial landscape is changing, though, and U.S. industries cannot, in isolation, expect that yesterday’s means of production will be able to compete in a global marketplace.

Even without domestic mandates to reduce their emissions, U.S. industries face powerful economic pressures. The EU’s new Carbon Border Adjustment Mechanism imposes a tax on the emissions associated with imported steel, chemicals, cement and aluminum entering European markets. Similar policies are being considered by Canada, Japan, Singapore, South Korea and the United Kingdom, and were even floated in the United States.

The false promise of carbon capture

The appeal of carbon capture and storage, in theory, was that it could be bolted on to an existing factory with minimal changes to the core process and the carbon pollution would go away.

Government incentives for carbon capture allow producers to keep using polluting technologies and prop up gas-powered chemical production or coal-powered concrete production.

The Trump administration’s pullback of carbon capture and storage grants now removes some of these artificial supports.

Without the expectation that carbon capture will help them meet regulations, companies may have more room to focus on materials breakthroughs that could revolutionize manufacturing while solving industries’ emissions problems.

The materials innovation opportunity

So, what might emissions-lowering innovation look like for industries such as cement, steel and chemicals? As a civil and environmental engineer who has worked on federal industrial policy, I study the ways these industries intersect with U.S. economic competitiveness and our built environment.

There are many examples of U.S. innovation to be excited about. Consider just a few industries:

Cement: Cement is one of the most widely used materials on Earth, but the technology has changed little over the past 150 years. Today, its production generates roughly 8% of total global carbon pollution. If cement production were a country, it would rank third globally after China and the United States.

Researchers are looking at ways to make concrete that can shed heat or be lighter in weight to significantly reduce the cost of building and cooling a home. Sublime Systems developed a way to produce cement with electricity instead of coal or gas. The company lost its Industrial Demonstrations Program grant in May 2025, but it has a new agreement with Microsoft.

Making concrete do more could accelerate the transition. Researchers at Stanford and separately at MIT are developing concrete that can act as a capacitor and store over 10 kilowatt-hours of energy per cubic meter. Such materials could potentially store electricity from your solar roof or allow for roadways that can charge cars in motion.
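
For a rough sense of scale, the sketch below multiplies the roughly 10 kilowatt-hours per cubic metre cited above by an assumed foundation volume. Both the volume and the household comparison are illustrative assumptions, not figures from the research.

```python
# Rough scale check for energy-storing concrete, using the ~10 kWh per
# cubic metre figure cited above. The foundation volume is an assumed,
# illustrative value, not a measurement of any real building.

ENERGY_DENSITY_KWH_PER_M3 = 10   # figure reported for the research concrete
foundation_volume_m3 = 30        # assumption: a modest house slab, e.g. 15 m x 10 m x 0.2 m

storage_kwh = ENERGY_DENSITY_KWH_PER_M3 * foundation_volume_m3
print(f"~{storage_kwh} kWh stored in a {foundation_volume_m3} cubic-metre foundation")
# ~300 kWh under these assumptions - enough, in principle, to buffer
# several days of a typical household's electricity use.
```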

How concrete could be used as a capacitor. MIT.

Technologies like these could give U.S. companies a competitive advantage while lowering emissions. Heat-shedding concrete cuts air conditioning demand, lighter formulations require less material per structure, and energy-storing concrete could potentially replace carbon-intensive battery manufacturing.

Steel and iron: Steel and iron production generate about 7% of global emissions with centuries-old blast furnace processes that use intense heat to melt iron ore and burn off impurities. A hydrogen-based steelmaking alternative exists today that emits only water vapor, but it requires new supply chains, infrastructure and production techniques.

U.S. Steel has been developing techniques to create stronger microstructures within steel for constructing structures with 50% less material and more strength than conventional designs. When a skyscraper needs that much less steel to achieve the same structural integrity, that eliminates millions of tons of iron ore mining, coal-fired blast furnace operations and transportation emissions.

Chemicals: Chemical manufacturing has created simultaneous crises over the past 50 years: PFAS “forever chemicals” and microplastics have been showing up in human blood and across ecosystems, and the industry generates a large share of U.S. industrial emissions.

Companies are developing ways to produce chemicals using engineered enzymes instead of traditional petrochemical processes, achieving 90% lower emissions in a way that could reduce production costs. These bio-based chemicals can naturally biodegrade, and the chemical processes operate at room temperature instead of requiring high heat that uses a lot of energy.

Is there a silver bullet without carbon capture?

While carbon capture and storage might not be the silver bullet for reducing emissions that many people thought it would be, new technologies for managing industrial heat might turn out to be the closest thing to one.

Most industrial processes require temperatures between 300 and 1,830 degrees Fahrenheit (150 and 1,000 degrees Celsius) for everything from food processing to steel production. Currently, industries burn fossil fuels directly to generate this heat, creating emissions that electric alternatives cannot easily replace. Heat batteries may offer a breakthrough solution by storing renewable electricity as thermal energy, then releasing that heat on demand for industrial processes.
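
To see why storing heat in something as cheap as brick can add up, the sketch below works through the standard sensible-heat relation Q = m × c × ΔT. Every value in it is an illustrative assumption, not a specification of Rondo’s or any other company’s system.

```python
# Back-of-the-envelope sketch of sensible-heat storage in a brick-based
# heat battery. All values are illustrative assumptions, not the
# specifications of any commercial system.

def stored_heat_kwh(mass_kg: float, specific_heat_kj_per_kg_k: float, delta_t_k: float) -> float:
    """Energy stored as sensible heat, Q = m * c * dT, converted to kWh."""
    q_kj = mass_kg * specific_heat_kj_per_kg_k * delta_t_k
    return q_kj / 3600.0  # 1 kWh = 3,600 kJ

# Assumptions: one tonne of brick (specific heat ~0.84 kJ/kg.K) heated
# by 1,000 kelvin using cheap off-peak wind or solar electricity.
print(round(stored_heat_kwh(1_000, 0.84, 1_000)))  # ~233 kWh per tonne of brick
# That heat can later be discharged on demand to an industrial process
# instead of burning gas or coal to reach the same temperatures.
```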

How thermal batteries work. CNBC.

Companies such as Rondo Energy are developing systems that store wind and solar power in bricklike materials heated to extreme temperatures. Essentially, they convert electricity into heat during times when electricity is abundant, usually at night. A manufacturing facility can later use that heat, which allows it to reduce energy costs and improve grid reliability by not drawing power at the busiest times. The Trump administration cut funding for projects working with Rondo’s technology, but the company’s products are being tested in other countries.

Industrial heat pumps provide another pathway by amplifying waste heat to reach the high temperatures manufacturing requires, without using as much fossil fuel.

The path forward

The Department of Energy’s decision forces industrial America into a defining moment. One path leads backward toward pollution-intensive business as usual propping up obsolete processes. The other path drives forward through innovation.

Carbon capture offered an expensive Band-Aid on old technology. Investing in materials innovation and new techniques for making them promises fundamental transformation for the future.

The Conversation

Andres Clarens receives funding from the National Science Foundation and the Alfred P Sloan Foundation.

Stone tools from a cave on South Africa’s coast speak of life at the end of the Ice Age

The Earth of the last Ice Age (about 26,000 to 19,000 years ago) was very different from today’s world.

In the northern hemisphere, ice sheets up to 8 kilometres tall covered much of Europe, Asia and North America, while much of the southern hemisphere became drier as water was drawn into the northern glaciers.

As more and more water was transformed into ice, global sea levels dropped as much as 125 metres from where they are now, exposing land that had been under the ocean.

In southernmost Africa, receding coastlines exposed an area of the continental shelf known as the Palaeo-Agulhas Plain. At its maximum extent, it covered an area of about 36,000 km² along the south coast of what’s now South Africa.

This now-extinct ecosystem was a highly productive landscape with abundant grasslands, wetlands, permanent water drainage systems, and seasonal flood plains. The Palaeo-Agulhas Plain was probably most similar to the present-day Serengeti in east Africa. It would likely have been able to support large herds of migratory animals and the people who hunted them.

We now know more about how these people lived thanks to data from a new archaeological site called Knysna Eastern Heads Cave 1.

The site sits 23 metres above sea level on the southern coast of South Africa overlooking the Indian Ocean. You can watch whales from the site today, but during the Ice Age the ocean was nowhere to be seen. Instead, the site looked out over the vast grasslands; the coast was 75 kilometres away.

Archaeological investigation of the cave began in 2014, led by Naomi Cleghorn of the University of Texas. This work shows that humans have been using the site for much of the last 48,000 years or more. Occupations bridge the Middle to Later Stone Age transition, which occurred sometime between about 40,000 and 25,000 years ago in southern Africa.

That transition is a period in which we see dramatic changes in the technologies people were using, including changes in the raw materials selected for making tools and a shift towards smaller tools. These changes are poorly understood because few sites have occupations dating to this time. Knysna Eastern Heads Cave 1 is the first site on the southern coast that provides a continuous occupational record near the end of the Pleistocene (Ice Age) and documents how life changed for people living on the edge of the Palaeo-Agulhas Plain.

Before the Ice Age, people there collected marine resources like shellfish when the coastline was close to the site. As the climate began to cool and sea levels dropped, they shifted their focus to land-based resources and game animals.

I am one of the archaeologists who have been working here. In a new study, my colleagues and I analysed stone tools from the cave that date to about 19,000 to 18,000 years ago, and discussed how the techniques used to make them hint at the ways that prehistoric people travelled, interacted, and shared their craft.

Based on this analysis, we think the cave may have been used as a temporary camp rather than a primary residence. And the similarity of the tools with those from other sites suggests people were connected over a huge region and shared ideas with each other, much like people do today.

Robberg technology of southern Africa

In human history, tools were invented in a succession of styles (“technologies” or “industries”), which can indicate the time and place where they were made and what they were used for.

The Robberg is one of southern Africa’s most distinctive and widespread stone tool technologies. Robberg tools – which we found at the Knysna site – are thought to be replaceable components in composite tools, perhaps as barbs set into arrow shafts, used to hunt the migratory herds on the Palaeo-Agulhas Plain.

We see the first appearance of Robberg technology in southern Africa near the peak of the last Ice Age around 26,000 years ago, and people continued producing these tools until around 12,000 years ago, when climate conditions were warmer.




Read more: What stone tools found in southern tip of Africa tell us about the human story


The particular methods and sequence of steps people used to make their tools were taught and learned. If we see specific methods of stone tool production at multiple sites, it indicates that people were sharing ideas with one another.

Robberg occupations at Knysna date to between 21,000 and 15,000 years ago, when sea levels were at their lowest and the coastline far away.

The Robberg tools we recovered were primarily made from rocks that were available close to the site. Most of the tools were made from quartz, which creates very sharp edges but can break unpredictably. Production focused on bladelets, or small elongated tools, which may have been replaceable components in hunting weapons.

Some of the tools were made from a raw material called silcrete. People in South Africa were heat treating this material to improve its quality for tool production as early as 164,000 years ago. The silcrete tools at Knysna were heat treated before being brought to the site. This is only the second documented instance of the use of heat treatment in Robberg technology.

Silcrete is not available near Knysna. Most of the accessible deposits in the area are in the Outeniqua mountains, at least 50 kilometres inland. We’re not sure yet whether people using the Knysna site were travelling to these raw material sources themselves or trading with other groups.

Archaeological sites containing Robberg tools are found in South Africa, Lesotho and Eswatini, indicating widespread adoption by people across southern Africa. The tools from the Knysna site share many characteristics with those from other sites, which suggests people were sharing information through social networks that may have spanned the breadth of the subcontinent.




Read more: 65,000-year-old ‘stone Swiss Army knives’ show early humans had long-distance social networks


Yet there are other aspects that are unique to the Knysna site. Fewer tools are found in the more recent layers than in deeper layers, suggesting that people were using the site less frequently than they had previously. This may suggest that during the Ice Age the cave was used as a temporary camp rather than as a primary residential site.

Left with questions

Stone tools can only tell us so much. Was Knysna Eastern Heads Cave 1 a temporary camp? If so, what were people coming to the cave for? We need to combine what we have learned from the stone tools with other data from the site to answer these questions.




Read more: Ancient human DNA from a South African rock shelter sheds light on 10,000 years of history


Something we can say with confidence is that we have a very long and rich history as a species, and our innovative and social natures go back a lot further in time than most people realise. Humans living during the last Ice Age had complex technologies to solve their problems, made art and music, connected with people in other communities, and in some places even had pet dogs.

Despite the dramatic differences in the world around us, these Ice Age people were not very different from people living today.

The Conversation

Sara Watson works for the Field Museum of Natural History and Indiana State University.


Small businesses are an innovation powerhouse. For many, it’s still too hard to raise the funds they need

The federal government wants to boost Australia’s productivity levels – as a matter of national priority. It’s impossible to have that conversation without also talking about innovation.

We can be proud of (and perhaps a little surprised by) some of the Australian innovations that have changed the world – such as the refrigerator, the electric drill, and more recently, the CPAP machine and the technology underpinning Google Maps.

Australia is continuing to drive advancements in machine learning, cybersecurity and green technologies. Innovation isn’t confined to the headquarters of big tech companies and university laboratories.

Small and medium enterprises – those with fewer than 200 employees – are a powerhouse of economic growth in Australia. Collectively, they contribute 56% of Australia’s gross domestic product (GDP) and employ 67% of the workforce.

Our own Reserve Bank has recognised they also have a huge role to play in driving innovation. However, they still face many barriers to accessing funding and investment, which can hamper their ability to do so.


The Federal Government is focussed on improving productivity. In this five-part series, we’ve asked leading experts what that means for the economy, what’s holding us back and their best ideas for reform.


Finding the funds to grow

We all know the saying “it takes money to make money”. Those starting or scaling a business have to invest in the present to generate cash in the future. This could involve buying equipment, renting space, or even investing in needed skills and knowledge.

A small, brand new startup might initially rely on debt (such as personal loans or credit cards) and investments from family and friends (sometimes called “love money”).

Having exhausted these sources, it may still need more funds to grow. Bank loans for businesses are common, quick and easy. But these require regular interest payments, which could slow growth.

Selling stakes

Alternatively, a business may look for investors willing to take ownership stakes.

This investment can take the form of “private equity”, where ownership stakes are sold through private arrangement to investors. These can range from individual “angel investors” through to huge venture capital and private equity firms managing billions in investments.

It can also take the form of “public equity”, where shares are offered and are then able to be bought and sold by anyone on a public stock exchange such as the Australian Securities Exchange (ASX).

Unfortunately, small and medium-sized companies face hurdles to accessing both kinds.

Companies need access to finance to turn ideas into reality.
Kvalifik/Unsplash

Private investors’ high bar to clear

Research examining the gap in small-scale private equity has found 46% of small and medium-sized firms in Australia would welcome an equity investment – despite saying they were able to acquire debt elsewhere.

They preferred private equity because they also wanted to learn from experienced investors who could help them grow their companies. However, very few small and medium-sized enterprises were able to meet private equity’s investment criteria.

When interviewed, many chief executives and chairs of small private equity firms said their lack of interest in small and medium-sized enterprises came down to the cost and difficulty of verifying information about the health and prospects of a business.

To make it easier for investors to compare investments, all public companies are required to disclose their financial information using International Financial Reporting Standards.

In contrast, small private companies can use a simplified set of rules and do not have to share their statements of profit and loss with the general public.

Share markets are costly and complex

Is it possible to list on a stock exchange instead? An initial public offering (IPO) would enable the company to raise funds by selling shares to the public.

Unfortunately, the process of issuing shares on a stock exchange is time-consuming and costly. It requires a team of advisers (accountants, lawyers and bankers), and filing fees are high.

There are also ongoing costs and obligations associated with being a publicly traded company, including detailed financial reporting.

Last week, the regulator, the Australian Securities and Investments Commission (ASIC), announced new measures to encourage more listings by streamlining the IPO process.

Despite this, many small companies do not meet the listing requirements for the ASX.

These include meeting a profits and assets test and having at least 300 investors (not including family) each holding at least A$2,000 worth of shares.

There is one less well-known alternative – the smaller National Stock Exchange of Australia (NSX), which focuses on early-stage companies. In theory, this should be a great alternative for small companies, but it has had limited success. The NSX is now set to be acquired by a Canadian market operator.

Making companies more attractive

Our previous research has highlighted that small and medium-sized businesses should try to make themselves more attractive to private equity companies. This could include improving their financial reporting and using a reputable major auditor.

For their part, private equity firms should cast a wider net and invest a little more time in screening and selecting high-quality smaller companies. That could pay off – if it means they avoid missing out on “the next Google Maps”.

What we now know as Google Maps began as an Australian startup.
Susan Quin & The Bigger Picture, CC BY

What about the $4 trillion of superannuation?

There are other opportunities we could explore. Australia’s pool of superannuation savings, for example, has grown so large that funds are running out of places to invest.

That’s led to some radical proposals. Ben Thompson, chief executive of Employment Hero, last year proposed big superannuation funds be forced to invest 1% of their cash into start-ups.

A less extreme option would be for regulators to reassess disclosure guidelines for financial providers, which may currently lead funds to prefer more established investments with proven track records.

There is an ongoing debate about whether the Australian Prudential Regulation Authority (APRA), which regulates banks and superannuation, is too cautious. Some believe APRA’s focus on risk management hurts innovation and may result in super funds avoiding startups (which generally have a higher likelihood of failure).

In response, APRA has pointed out that the global financial crisis was a reminder of the need for caution, to ensure financial stability and protect consumers.


The author would like to acknowledge her former doctoral student, the late Dr Bruce Dwyer, who made significant contributions to research discussed in this article. Bruce passed away in a tragic accident earlier this year.

The Conversation

Colette Southam does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.