When John Barber developed the first gas turbine in 1791, he probably didn’t expect his invention to be transformative.
By 1886, after nearly a century of innovation, Karl Benz began commercial production of the first automobile powered by an internal combustion engine.
A decade later, one of his products knocked down Bridget Driscoll near Crystal Palace, making the Croydon resident the first person to be killed by a car in the UK.
Drivers of that day didn’t need a driver’s license — they weren’t introduced until 1903 — and didn’t have to pass a driving test for the next 32 years.
The Highway Code, first published in 1931, included instructions for drivers of horse-drawn carriages, for whom drunk driving had been an offence almost 30 years before the same rules applied to motorists.
Seat belts were not made mandatory in Britain until 1983, which halved the death toll.
The gist of it is that the internal combustion engine is a world-changing technology that took decades to develop and even longer to effectively regulate.
Like the man once paid to walk ahead of each new machine carrying a red flag, lawmakers are smothered in dust as manufacturers charge forward and tarmac transforms the economy.
The artificial intelligence revolution
Today, artificial intelligence heralds a similar industrial revolution, but the pace of development is measured in months and years rather than decades, and its inventors are conscious of the risks.
Doomsday predictions emerged last month, with leading developers warning that “generative” artificial intelligence, capable of producing text and images from prompts and learning as it goes, poses a “societal-scale” risk on a par with “pandemics and nuclear war”.
This makes regulation and oversight crucial, an area Rishi Sunak has said he wants to own, declaring that the UK could lead the international discussion.
He will host a “global summit” in the autumn and has suggested a British body modelled on the International Atomic Energy Agency could follow.
As well as generating favorable coverage of the prime minister’s trip to Washington, the move is part of a wider ambition to position the UK as an artificial intelligence hub and to make digital innovation a priority for growth.
He also sees regulation as an opportunity, though what that might look like in practice is less clear.
We do know that, as recently as March, the government intended to follow the motoring model.
In a white paper, it said it would focus on “the use of AI rather than the technology itself” to “enable responsible AI applications to flourish”.
Instead of passing laws to limit the technology, existing regulators will monitor its use in their fields and work with developers to establish acceptable boundaries.
As a result, rather than a central AI agency being created, the healthcare regulator will oversee AI’s use in diagnostics, Ofcom will remain responsible for policing machine-generated misinformation online, and the Office of Rail and Road will decide whether AI analysis is an acceptable basis for safety inspections.
The model applies readily to industries already using generative artificial intelligence.
Energy company Octopus uses artificial intelligence tools to reply to more than 40% of customer correspondence, but to comply with data protection laws it strips all personal data from emails before the AI reads them.
Sunak has appeared to go further in recent weeks, speaking of the need for “guardrails” to guide AI, and there are already concerns that regulators will be left behind.
The Trades Union Congress (TUC) argues that stricter employment laws are already needed. Employers are already using AI to screen job applications, and unions say it can also make hiring and firing decisions in some cases.
They want everyone to have a legal right of appeal to a human, rather than relying on the judgment of machines, not least because biases and stereotypes can become entrenched as AI learns from previous experience.
The UK’s approach stands in stark contrast to the EU, where the European Commission has proposed what it says is the world’s first AI legal framework based on four levels of risk to people’s livelihoods, security and rights.
Government use of AI for social scoring, or in toys that could encourage risky behaviour, would be deemed an “unacceptable risk” and banned.
Areas of least risk include video games and spam filters, while limited risk covers the use of chatbots, provided users are made clearly aware that they are talking to a machine.
High-risk areas include education, critical infrastructure, employment (such as resume classification), immigration decision-making, public sector decision-making, and any application in the justice system.
To be legal in these areas, AI tools must meet a number of conditions, including “appropriate human oversight,” and the ability to track and record activity.
When a computer says no, we need to know why
One of the challenges facing developers and users today is that it’s not always clear how AI reaches its conclusions.
ChatGPT and other language tools generate plausibly human text, but there is no way of knowing where they get their information or inspiration.
It probably doesn’t matter if you ask it to write a limerick or a letter.
It matters a great deal if it is determining your eligibility for benefits. When a computer says no, we need to know why.
Making the UK a home of technology regulation, a sort of digital Switzerland, is an attractive post-Brexit ambition, but the jury is still out on whether it will work.
As the ongoing saga of preserving EU law demonstrates, we may want to make our own regulations, but business logic sometimes dictates that we must follow the larger market.
However, doing nothing is not an option.
AI is advancing so fast, the UK cannot be left by the wayside as it heads towards the horizon.
Uniquely, it could become the first technology to know its destination better than we do, for better or worse.