Ahead of the November summit on artificial intelligence, world leaders are anticipated to deliberate on the potential benefits and hazards of AI. The UK Prime Minister, Rishi Sunak, is expected to attend the summit, which is set to take place at Bletchley Park in the United Kingdom, according to Sky News.
Parliamentarians cautioned that any government legislation should centre on the possible threat AI poses to human life itself. Historically, during the Second World War, codebreakers such as Alan Turing deciphered Nazi transmissions at Bletchley Park, a converted private home commandeered by the British Secret Intelligence Service in 1938.
The location was essential to the advancement of technology, according to Sky News, since Alan Turing and his associates utilised Colossus computers to decode communications exchanged among the Nazis. Greg Clark, a member of parliament and the head of the Science, Innovation, and Technology Committee, expressed his “strong welcome” for the summit. According to Clark, “technology is going to be global.
We should try to study whether we can have an agreement on this. There is some thinking that has to be done about AI safety across all countries. It would be advantageous to include as many voices as possible at this, the first global AI meeting.”
The following 12 issues, according to the committee, “must be addressed”:
1. Existential peril – If artificial intelligence (AI) really is a serious threat to human life, as some academics have warned, then regulations must protect national security.
2. Prejudice – AI has the power to create new biases or amplify pre-existing ones in society.
3. Privacy – AI models may be trained using private data on people or companies.
4. Misrepresentation – Content generated by language models such as ChatGPT may contain errors on an individual’s behaviour, opinions, or character.
5. Data – The volume of data required to develop the strongest AI is overwhelming.
6. Processing power – In a similar vein, creating the strongest AI possible calls for massive processing power.
7. Transparency – AI models frequently find it difficult to articulate the reasoning behind their output or the source of the data.
8. Copyright – Generative models, be they textual, graphic, audio, or video, usually incorporate pre-existing content, which needs to be safeguarded to prevent harm to the creative industries.
9. Liability – The policy should specify who is responsible if AI products are misused for malicious purposes, be it the developers or the providers.
10. Employment – Politicians need to consider how adopting AI is likely to affect current employment.
11. Openness – To enable more dependable regulation and encourage transparency and innovation, the computer code underlying AI models might be made publicly available.
12. International coordination – Any regulation’s creation must be an international endeavour, and “as wide a range of countries as possible” must be invited to the November summit.