AI and Its Ethical Boundaries
As we enter 2025, we carry powerful technology in our hands and pockets. AI now enables voice recognition, synthesized voices, augmented reality, and driverless vehicles. Scandals like Cambridge Analytica have highlighted the ethical risks of AI. People are now more aware of how AI can threaten privacy, fairness, and human autonomy. Big tech companies face growing pressure to act responsibly. How can we ensure AI protects human values?
AI’s growing influence
Artificial Intelligence (AI) has become one of the most talked-about technologies today, and its popularity is understandable. Just a decade ago, most AI research happened quietly in labs, where scientists refined the technology for practical applications.
By 2025, nearly 1.8 billion people worldwide have used AI tools, and around 500–600 million use them daily. In business, 78% of organisations now employ AI in at least one function, up from 55% just a year earlier.
The global AI market is valued at $391 billion and could reach $1.8 trillion by 2030. Growth isn’t limited to one region: India leads in ChatGPT usage with 13.5% of global monthly active users, surpassing the U.S. at 8.9%. Within the U.S., Washington, D.C., has the highest per-capita AI use, with usage of Anthropic’s Claude running 3.82 times higher than expected for its population.
A decade ago, we could only dream about AI-powered systems; now we rely on them for countless daily tasks. Once confined to research labs and futuristic predictions, AI has become integral to industries from healthcare and finance to transportation and entertainment, and to the devices we use every day.
Tools like voice recognition, driverless cars, and AI-powered recommendations have become familiar features in our lives, even if we don’t always recognise them as AI. As we step into 2025, AI’s influence only grows, creating both excitement and concern.
The speed at which AI has evolved is nothing short of extraordinary, but this rapid development brings with it a host of ethical challenges. We are no longer just asking how AI works or how it can be improved; we are also asking who it serves, what it should or shouldn’t do, and how we can ensure it is used responsibly. As we embrace this transformative technology, it’s essential for us to confront these ethical dilemmas head-on.

4IR: The AI-driven digital revolution transforming industries, economies, and societies worldwide. Infographic by Dinis Guarda
AI in our daily lives
AI’s presence in our daily lives is no longer limited to sci-fi movies or speculative fiction; it has become deeply embedded in the digital tools we use every day. One clear example is Google’s Smart Compose, which helps us write emails by predicting the words we are about to type. Features like this show how AI systems have blended into the digital landscape, helping us complete tasks efficiently and make smarter, data-driven decisions.
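To make the idea concrete, here is a deliberately simplified next-word prediction sketch in Python. It is not Google’s Smart Compose, which relies on large neural language models; this toy version simply counts which word tends to follow which in some sample text and suggests the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction, the idea behind features like
# Smart Compose. This is NOT Google's implementation: real systems use large
# neural language models rather than bigram counts over a few sentences.
sample_text = (
    "thank you for your email please let me know if you have any questions "
    "thank you for your time please let me know what you think"
)

# Count how often each word follows each other word (a bigram model).
bigram_counts = defaultdict(Counter)
tokens = sample_text.split()
for current_word, next_word in zip(tokens, tokens[1:]):
    bigram_counts[current_word][next_word] += 1

def suggest_next_word(typed_so_far: str) -> str | None:
    """Suggest the most frequent follower of the last typed word, if any."""
    words = typed_so_far.lower().split()
    if not words:
        return None
    followers = bigram_counts.get(words[-1])
    if not followers:
        return None
    return followers.most_common(1)[0][0]

print(suggest_next_word("please let me"))  # -> "know"
print(suggest_next_word("thank you for"))  # -> "your"
```

Even this toy version captures the pattern that matters here: the system learns from past text and quietly completes ours, which is precisely why questions about what it learns from become important.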
However, despite the benefits of this technology, its rapid development and widespread use have raised important ethical concerns. AI’s ability to predict behaviour, influence decisions, and control aspects of human interaction creates a growing need for responsible management and transparency.
As we move forward, though, we must ask ourselves: Is AI advancing human progress, or is it contributing to a new set of challenges?
Ethics of AI: A growing concern
AI’s rapid growth has raised fundamental ethical questions about its role in society. While the technology promises to deliver huge benefits, its development must be approached with caution. As we continue to integrate AI into more aspects of our lives, it’s crucial to consider its ethical implications.
We must ask: How do we ensure AI is used responsibly? How do we balance innovation with fairness, privacy, and accountability?
The issue of AI’s ethical boundaries has gained significant attention in recent years. With the rise of data privacy concerns, biases in AI algorithms, and the growing use of AI in high-stakes areas like military applications and law enforcement, we are forced to confront the consequences of unchecked AI development.
A key example of this is Project Maven, a controversial initiative that involved Google working with the Pentagon to develop AI tools for military purposes. The project aimed to use AI to help military personnel identify potential targets from drone footage. This led to protests from over 4,500 Google employees, who argued that the technology could be used for harmful purposes.
As a result, Google announced in 2018 that it would not renew the contract when it expired the following year.
Google’s decision to establish a set of AI principles in response to this protest marked a pivotal moment in the ongoing discussion about AI ethics. These principles included commitments not to develop AI systems that would perpetuate societal biases or be used in harmful ways, such as in weapons systems.
While these commitments were important, they also raised questions about how much we can trust tech companies to regulate their own activities, especially when it comes to the vast amounts of data they collect from us, their users. The question remains: Can we rely on companies to adhere to ethical guidelines when there is significant profit to be made from AI?

The expanding ecosystem of Artificial Intelligence, Infographic by Dinis Guarda
The role of tech companies in AI ethics
More tech companies are becoming aware of the ethical challenges that come with AI development. Facebook, for example, has started to recognise the potential blind spots in its AI applications: Joaquin Candela, the company’s director of applied machine learning, acknowledged in 2018 that Facebook had been focusing too narrowly on certain applications without fully considering their broader implications.
In response, Facebook has created tools like Fairness Flow, which allows engineers to assess whether their AI models work fairly across different demographic groups. While such efforts are commendable, they still do not address the broader issue of how these systems are shaping our society.
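Fairness Flow’s internal workings are not public, but we can sketch the kind of check such a tool performs. The Python example below uses entirely hypothetical predictions to compare a binary classifier’s error rates across demographic groups; large gaps between groups would be a warning sign.

```python
from collections import defaultdict

# Hypothetical per-group fairness audit, in the spirit of tools like
# Fairness Flow (whose internals are not public). Each record is
# (demographic_group, true_label, predicted_label) for a binary classifier.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]

counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "tn": 0})
for group, truth, prediction in records:
    if truth == 1 and prediction == 1:
        counts[group]["tp"] += 1
    elif truth == 0 and prediction == 1:
        counts[group]["fp"] += 1
    elif truth == 1 and prediction == 0:
        counts[group]["fn"] += 1
    else:
        counts[group]["tn"] += 1

for group, c in counts.items():
    # False positive rate and true positive rate, per demographic group.
    fpr = c["fp"] / (c["fp"] + c["tn"]) if (c["fp"] + c["tn"]) else 0.0
    tpr = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else 0.0
    print(f"{group}: false positive rate={fpr:.2f}, true positive rate={tpr:.2f}")
```

The specific metric matters less than the discipline it represents: fairness claims only mean something when error rates are measured group by group and the gaps are reported.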
Another critical issue that has emerged in the AI space is the use of facial recognition technology. While it holds potential for improving security and convenience, it has also come under scrutiny for its biases, particularly in relation to how it identifies darker-skinned individuals.
Studies have shown that facial recognition systems are more likely to misidentify women and people of colour, which raises significant concerns about racial and gender discrimination. As a result, there has been increased pressure on companies like Amazon and Microsoft to reconsider how they deploy these technologies, with some calls for a ban on facial recognition in certain contexts.
These issues highlight the importance of transparency and accountability in AI development. It is not enough for companies to simply state that their systems are fair; they must demonstrate how they ensure their AI technologies are free from bias and discrimination.
Collaborating for ethical AI
In response to the growing ethical concerns surrounding AI, a number of initiatives have been launched to develop frameworks and guidelines for responsible AI development. One such initiative is the Partnership on AI, a consortium of leading tech companies, academics, and non-profit organisations working together to ensure that AI is developed in a way that benefits society.
The consortium’s focus is on promoting fairness, accountability, and transparency in AI, and it aims to address some of the biggest ethical challenges facing the industry.
However, while such efforts are important, we must recognise that the drive for growth and innovation can sometimes overshadow ethical concerns. A notable example is Microsoft’s contract with U.S. Immigration and Customs Enforcement (ICE), which the company promoted as including facial recognition capabilities to support the agency’s operations.
Despite protests from employees, Microsoft continued with the contract, raising questions about the company’s commitment to ethical standards when business interests are at stake.
As AI technology continues to evolve, it is crucial for us to continue developing ethical frameworks and regulatory mechanisms that ensure its responsible use. The Partnership on AI and other similar initiatives are important steps in the right direction, but they must be supported by concrete actions that hold companies accountable for the impact their technologies have on society.

AI + Digital 360: Building literacy, creativity, and critical thinking for the digital age. Infographic by Dinis Guarda
Regulation in AI: A global scenario
As AI spreads across industries, the need for clear rules has become more pressing. Different countries and regions are taking different approaches to governance, and their choices reflect their cultural values, legal systems, and economic goals. Yet they share a common aim: to ensure that AI technologies are developed and used responsibly.
In the European Union, the EU AI Act stands as one of the most thorough regulatory efforts so far. It takes a risk-based approach, classifying AI systems into unacceptable-risk, high-risk, limited-risk, and minimal-risk categories. High-risk systems, such as those used in healthcare diagnostics or self-driving cars, must meet strict standards for transparency, safety, and accountability. This framework aims to protect users while still promoting innovation.
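As a rough illustration of what a risk-based approach implies in practice, the Python sketch below maps a few hypothetical use cases to the Act’s tiers and lists example obligations for high-risk systems. The tier names follow the Act, but the mappings and obligations shown here are simplified illustrations, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    # The four tiers of the EU AI Act's risk-based approach.
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g. social scoring)
    HIGH = "high"                  # permitted under strict obligations
    LIMITED = "limited"            # transparency duties (e.g. chatbots)
    MINIMAL = "minimal"            # largely unregulated (e.g. spam filters)

# Illustrative mapping only; real classification depends on the Act's annexes
# and legal analysis, not on a lookup table.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_diagnostics": RiskTier.HIGH,
    "self_driving_vehicle": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

HIGH_RISK_OBLIGATIONS = [
    "risk management system",
    "high-quality training data",
    "technical documentation and logging",
    "human oversight",
    "accuracy, robustness and cybersecurity testing",
]

def obligations_for(use_case: str) -> list[str]:
    """Return example obligations for a use case under this toy mapping."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError(f"{use_case}: prohibited under the EU AI Act")
    if tier is RiskTier.HIGH:
        return HIGH_RISK_OBLIGATIONS
    if tier is RiskTier.LIMITED:
        return ["inform users they are interacting with an AI system"]
    return []

print(obligations_for("medical_diagnostics"))
```

The structure also shows why the approach is demanding in practice: a developer cannot know a system’s obligations without first pinning down its intended use.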
In contrast, the United States has taken a more decentralised approach. Instead of having one national law, the U.S. relies on specific guidelines for different sectors and regulations at the state level. This provides flexibility and encourages fast technological testing, but it also results in inconsistencies and uneven protection for users in various regions.
At the same time, countries like China have set strict rules that focus on government control and national security. China has enacted laws that regulate data use, recommendation algorithms, and deepfake content. This reflects a system where state control is a key part of AI governance. India, on the other hand, is still developing its regulatory framework. It is currently working to find a balance between promoting innovation and ensuring digital sovereignty, especially with its rapidly growing AI user base.
Human-centric AI
As we move through 2025 and beyond, the ethical questions surrounding AI are only likely to become more complex. AI is already being deployed in critical areas like healthcare, finance, and law enforcement, and its influence will only continue to grow.
With this increased reliance on AI, we must continue to ask ourselves important questions: How do we ensure AI is used fairly? How do we protect our privacy in a world where data is constantly being collected? How can we hold companies accountable when their AI systems cause harm?
The future of AI will require collaboration between governments, corporations, and civil society. We must work together to develop global standards for AI ethics and ensure that the technology is used in ways that align with human values.
This means fostering transparency, accountability, and inclusivity in AI development and ensuring that AI technologies are designed with the needs and well-being of all people in mind.