The Gibraltar Government will carry out a wide consultation as it analyses whether to regulate the use of artificial intelligence (AI).
Chief Minister Fabian Picardo said the use of AI had broad implications that required extensive input, particularly in respect of the use of AI in the delivery of public services.
He was responding to questions from GSD MP Roy Clinton during a session of the Gibraltar Parliament in the run-up to Christmas last year.
“This is an issue that has to be dealt with in consultation,” Mr Picardo said.
“There would be a consultation across the digital services that the government provides and across other departments.”
“For example, there are issues in relation to financial services, there are issues in relation to health.”
“There are issues that would affect all of the departments in the context of artificial intelligence.”
“And therefore, I envisage a very wide consultation.”
“The European Union has recently adopted legislation. It's the first entity to have adopted legislation on artificial intelligence.”
“And the United Kingdom is leading on the Bletchley Park principles to try and establish a global standard.”
Mr Picardo said there was no timeframe yet for the planned consultation.
The exchange came against the backdrop of fast-paced change in AI technologies and how countries are seeking to regulate their use.
Around the world, governments are assessing how to harness the potential offered by AI across multiple fields, while also tackling concerns about data protection and fears that the technology could be exploited by bad actors.
Last November the UK hosted an AI Safety Summit, the first time that governments, technology industry representatives and academics had gathered to discuss a global regulatory framework for AI.
At the summit, held at Bletchley Park, Prime Minister Rishi Sunak announced the launch of an AI Safety Institute, billed as a “global hub” to test new AI models before they are released, and led the signing of the Bletchley Declaration, in which 28 countries pledged to develop artificial intelligence safely and responsibly.
Mr Sunak said the task of monitoring the risks posed by AI could not be left to tech firms alone.
He warned companies could not be left to “mark their own homework”, against a backdrop of concerns about the technology’s potential capabilities.
“I believe there will be nothing more transformative to the futures of our children and grandchildren than technological advances like AI,” Mr Sunak said at the time.
“We owe it to them to ensure AI develops in a safe and responsible way, gripping the risks it poses early enough in the process.”
A starker warning came from billionaire tech boss Elon Musk in an interview with PA during the summit.
Mr Musk said he believes AI is “one of the biggest threats” to humanity, and that the summit was “timely” given the scale of the threat.
He said it was an “existential risk” because humans for the first time were faced with something “that is going to be far more intelligent than us”.
But despite organising the landmark conference, Mr Sunak faced calls for “significant legislation” in the UK to address concerns about privacy and bias, amid criticism that the UK’s Online Safety Act, which passed into law last year, did not go far enough when it came to AI.
Writing in the Telegraph this week, Jonathan Hall KC, the UK’s independent reviewer of terrorism legislation, said the Online Safety Act is “unsuited to sophisticated and generative AI”.
“Only human beings can commit terrorism offences, and it is hard to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism,” he said.
“Our laws must be capable of deterring the most cynical or reckless online conduct – and that must include reaching behind the curtain to the big tech platforms in the worst cases, using updated terrorism and online safety laws that are fit for the age of AI.”
Experts have previously warned users of ChatGPT and other chatbots to resist sharing private information while using the technology.
Michael Wooldridge, a professor of computer science at Oxford University, said complaining about personal relationships or expressing political views to the AI was “extremely unwise”.
Prof Wooldridge said users should assume any information they type into ChatGPT or similar chatbots is “just going to be fed directly into future versions”, and that it was nearly impossible to get data back once it was in the system.