Artificial intelligence (AI) has increasingly become part of everyday life over the past decade.
It is used for everything from personalising social media feeds to powering medical breakthroughs.
But as big tech firms and governments vie to be at the forefront of AI's development, critics have expressed caution over its potential misuse, ethical complexities and environmental impact.
AI allows computers to learn and solve problems in ways that can seem human.
Computers cannot think, empathise or reason. However, scientists have developed systems that can perform tasks which usually require human intelligence, by attempting to replicate how people acquire and use knowledge.
AI programs can process large amounts of data, identify patterns and follow detailed instructions about what to do with that information.
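The pattern-finding idea can be illustrated with a toy example. The sketch below, in plain Python with invented training data, "learns" which words mark a message as spam by counting how often each word appears in labelled examples — a vastly simplified stand-in for what real AI systems do at scale.

```python
from collections import Counter

# Toy labelled data standing in for the "large amounts of data" an
# AI system learns from. All examples and labels here are invented.
training = [
    ("win a free prize now", "spam"),
    ("claim your free reward", "spam"),
    ("meeting moved to friday", "normal"),
    ("lunch at noon on friday", "normal"),
]

# "Learning": count how often each word appears under each label.
counts = {"spam": Counter(), "normal": Counter()}
for text, label in training:
    counts[label].update(text.split())

def classify(text):
    """Pick the label whose examples share the most words with the text."""
    scores = {
        label: sum(counter[word] for word in text.split())
        for label, counter in counts.items()
    }
    return max(scores, key=scores.get)

print(classify("free prize inside"))       # → spam
print(classify("see you at the meeting"))  # → normal
```

Real systems use far more sophisticated statistics and billions of examples, but the principle — extracting patterns from data and applying them to new inputs — is the same.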
Generative AI is used to create new content which can feel like it has been made by a human.
It does this by learning from vast quantities of existing data such as online text and images.
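That "learn from existing data, then produce something new" loop can be sketched in miniature. The following toy generator (plain Python; the corpus sentence is made up) records which word tends to follow each word in its training text, then chains those learned pairs into new output — the same idea, in spirit, behind far larger generative models.

```python
import random
from collections import defaultdict

random.seed(0)  # make the sketch reproducible

# Tiny stand-in for the "vast quantities of existing data".
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Learning": record which word tends to follow each word.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

# "Generation": start from a word and repeatedly pick a plausible
# next word, producing text the corpus never contained verbatim.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows.get(word, corpus))
    output.append(word)

print(" ".join(output))
```

Large generative models replace this word-pair table with neural networks trained on billions of documents, but the output is likewise stitched together from statistical patterns in what the system has already seen.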
ChatGPT and Chinese rival DeepSeek's chatbot are two widely used generative AI tools, while Midjourney can create images from simple text prompts.
So-called chatbots such as Google's Gemini or Meta AI can hold text conversations with users.
This has added to concerns about the use of AI in schools and workplaces, where it is increasingly used to help summarise texts, write emails or essays and solve bugs in code.
There are worries about students using AI technology to "cheat" on assignments, or employees "smuggling" it into work.
Writers, musicians and artists have also pushed back against the technology, accusing AI developers of using their work to train systems without consent or compensation.