Responsible and sustainable use of AI

“It is necessary to promote the responsible and sustainable use of artificial intelligence, with due transparency, explainability and auditability”, highlights Alexandre Zavaglia Coelho, specialist in Governance and Ethics of Artificial Intelligence.

A prominent theme at the 2024 edition of Web Summit Rio, artificial intelligence and its regulation have been discussed hand in hand with ethics: after all, we are human and we also live in an analog world, where there are responsibilities we must fulfill.

The development of artificial intelligence that respects ethical and social values requires collaboration between government sectors, technology companies and civil society. Only through a collective effort will it be possible to create robust governance structures that ensure the use of AI for the benefit of all, without putting the privacy or autonomy of individuals at risk.

“We need to create this culture of collaboration to establish guidelines and good practices and, most importantly, to define the technical standards of these principles, bringing effectiveness to the regulation that is being discussed around the world.”

As AI advances rapidly, concerns are emerging about the impact of this technology on labor relations and the maintenance of human rights. Automation and decision-making based on algorithms can profoundly affect sectors such as the labor market, education, and even the justice system. To ensure that AI is used fairly and responsibly, experts advocate the creation of regulations that establish clear limits on the use of algorithms in sensitive situations, such as employee recruitment, credit granting, and judicial decisions. The creation of “risk zones” for the use of AI, suggested by some countries, seeks to protect the population from possible abuses and promote a balance between innovation and security.

Another fundamental aspect in discussions about AI and ethics is the issue of individual autonomy and privacy. AI technologies, such as facial recognition and behavior monitoring, raise questions about mass surveillance and social control, which can compromise individual freedom. In response to these concerns, some companies and governments have been investing in “open source” AI, allowing independent experts to inspect the functioning of algorithms and identify potential biases or security flaws. This movement towards more transparent AI is also driving demand for tools that enable rigorous and continuous auditing of algorithms, ensuring that they are aligned with the principles of justice and equity that society seeks.

This is why experts have stressed that the technology itself must be accountable at every stage, from data collection to algorithm deployment, since it is only by addressing algorithmic bias that equity and diversity can be ensured in the development of these systems. Including ethics in computer science curricula and running public awareness campaigns are ways to ensure that all stakeholders understand the challenges and opportunities AI presents to society, paving the way for regulation that ensures the responsible use of this technology.

Technology is the starting point for many of Ryto Public Affairs’ areas of activity. Our experienced team of collaborators and advisors stays constantly up to date on the world’s biggest events, contributing to the development of innovative solutions and insights for our clients.

Eduardo Shor