Could artificial intelligence swallow science whole?
Research depends on both funding and data, and AI threatens to gobble up both to feed tech companies
Science is by far the most useful and systematic way of sorting good ideas from bad ones. Humans who want to understand their world and use that understanding to improve their lives need ways to sort good ideas from ideas that merely sound good. Since the Enlightenment, that’s how most progress has … progressed.
The scientific method, testing ideas in as disinterested a way as possible while remaining open to one's high likelihood of being wrong, is the foundation of a better understanding of the world. And that understanding can then be used to improve health, happiness, productivity and safety.
I know, I know! Not all progress is good. All the marvellous science-driven things we have done these past 300 years with the energy released by burning fossil fuels have distinct downsides. And I know people think mere argument, sentiment, mythology and the retelling of lived experiences are pretty special ways of knowing. I would love to stay and bicker.
Actually I wouldn’t. I would rather test my assertion that science puts these other pastimes in the shade. But … well … I haven’t put in a grant application to do that.
You see, science is expensive. And doing it properly requires great care, and compliance with laws and rules that have been put in place for very good reasons. I would rather it was easier. And sign me up for any initiative to drain the bathwater while still caring for the baby.
But as an advocate for science and its rightly famous method, I am concerned that among the many games that artificial intelligence (AI) is changing, it could simply ingest science as we know it and not even pause to spit out the bones.
Data
For years, I have worried that from now on the most important breakthroughs in understanding human behaviour will be made by machines. Machines owned by companies like Google, Meta, Baidu, OpenAI and Amazon. The usual suspects.
The data we leave as we meander through the Internet, carelessly ticking the “accept all” button lest our progress be slowed even a little, is the new gold. What we do and say online is fodder for ever more inventive algorithms that build models of how best to serve up content, advertising and (fake) news. Other algorithms run tests to see which ways of presenting content are most efficient. If you are here, you know the drill.
Companies can mine those data for new insights into what works. Like a toddler's, their curiosity doesn't often extend beyond immediate self-interest, but the opportunity is there to discover general principles of human attention, preferences, belief formation, discrimination against outgroups and so on.
Scientists, by comparison, take a completely different route to knowledge. They come up with an idea to test, develop a research plan, seek funding, obtain permission from their institution’s ethics review board to recruit a specific number of participants, recruit said participants and obtain their consent to do the study. Only then can they gather and analyse the data and write it up.
Data are expensive for scientists. The steps involved in gathering data are the biggest constraint on testing our ideas. The opposite is true for technology companies.
Facebook doesn't have to take any of these steps if it wants to test whether stories that stoke outrage lead to higher advertising click-through. Amazon does not need ethics review permission to test whether people who purchase running gear are likely to buy power tools. And, should their teams generate more general, dare I say interesting, ideas about human behaviour, there are no guardrails in place. Only if they want to collaborate with scientists, or to publish in academic journals, do they need to bother with ethics and consent.
Data for profit is far easier to gather and to keep than data for the public good.
Perhaps the greatest potential discoveries about human behaviour lie buried in the growing slag-heaps of data, unavailable to science because they have not been gathered ethically and the data are commercially valuable. Tech companies will continue using brute force to mine datasets for an extra 90 seconds of user attention per day. The same datasets, in public hands, could cough up the secrets to reversing the crisis in anxiety among young adults.
Money
Behavioural science being outgunned by tech companies is a money problem. Social media and user experience are big business, so companies can fund the gathering and storage of all that data, and pay analysts and AI designers to extract the valuable insights from it.
Then, today, my AI-expert colleague Toby Walsh coughs up this gem in an article in The Conversation.
Nearly US$1 billion is being invested in artificial intelligence every day by companies like Google, Microsoft, Meta and Amazon. That's around one quarter of the world's total research and development (R&D) budget.
I had never seen those numbers, but they certainly grabbed my attention. Those billions are the usual suspects and their shareholders putting the big bucks into finding new ways to make profitable AI products. It’s not as if they are siphoning off public funding for science.
Well, not directly. But they are ensuring that new discoveries in AI are made in-house, and that their profitable yields are harvested in-house. Many of these discoveries will deliver great benefits to individuals and the economy. But some, like social media, will impose new kinds of costs in terms of draining user attention and weaponising misinformation. So at the very least that US$1 billion a day is not about improving human understanding or lives.
It would be easy to ignore this, perhaps to celebrate that AI is moving ahead so fast, if it weren't happening in the context of what looks like the state capture of the USA by plutocrats, including major players in AI.
The US$500 billion promised by Donald Trump to a consortium of tech companies for the Stargate initiative seems, on the surface, like good news. But at the same time the DOGE initiative, headed by Elon Musk, is slashing and burning its way through the US Government, including the Education Department and the two biggest public-interest funding agencies, the National Science Foundation and the National Institutes of Health.
However you view the cause and effect, it is clear that a tectonic shift in American spending favours the companies that gather your data and store it with very little meaningful consent from you, and disfavours the places where funding is allocated in the public interest, with citizen oversight and careful ethical consideration. That public funding is more competitive than the Bay Area startup ecosystem, ensuring the very best ideas get funded. It's a refreshing contrast with the patronage and sloshing largesse of in-house corporate investment and government-corporate "partnership".
How will the AI of the future deliver maximum benefit for humanity, as many claim it will, if it is suffering from indigestion after gobbling up the only institution known to be able to separate nutritious ideas from crap?