Why the EU’s Artificial Intelligence Act could harm innovation

The EU’s proposed Artificial Intelligence Act plans to restrict open-source AI. But that will come at a cost to advancement and innovation, argues Nitish Mutha of Genie AI

The proposed – and still debated – Artificial Intelligence Act (AIA) from the EU touches upon the regulation of open-source AI. But enforcing strict restrictions on the sharing and distribution of open-source general-purpose AI (GPAI) would be a completely retrograde step – like winding the world back 30 years.

Open-source culture is the only reason mankind has been able to advance technology at such speed. AI researchers have only recently embraced sharing their code for greater transparency and verification, and putting constraints on this movement will damage the cultural progress the scientific community has made.

‘Regulations are good and should be welcomed, but not at the cost of creativity and scientific advancements’

It takes a lot of energy and effort to bring about a cultural shift in a community, so it would be sad and demoralising to see it stunted. The whole Artificial Intelligence Act needs to be considered very carefully, and its proposed changes have sent ripples through the open-source AI and technology community.

Counteractive objectives

Two objectives from the act’s proposed regulatory framework stand out in particular:

  • ‘ensure legal certainty to facilitate investment and innovation in AI’; and
  • ‘facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation’.

Introducing regulations on GPAI seems to counteract these statements. GPAI thrives on innovation and knowledge sharing without fear of damaging legal repercussions and costs. So, rather than creating a safe market that withstands fragmentation, what could actually happen is a raft of stringent legal regulations that both inhibit open-source development and further concentrate AI development in the hands of the large tech companies.

This is more likely to create a market that is less open, and, therefore, one in which it is harder to gauge whether AI applications are ‘lawful, safe and trustworthy’. Of course, this is all counterproductive to GPAI. Instead, the disparity that could be generated by such impositions will place greater power in the hands of the monopolists – a growing and worrying concern.

But… we need regulations?

It’s also important to acknowledge those who may see this backlash against the changes as an attempt by companies to weasel their way out of regulation. Surely regulations are needed to prevent dangerous malpractice. Without regulations, won’t AI fall into the wrong hands?

It’s a valid concern, and, yes, of course we need regulations (as outlined below). But this regulation should be created on an application basis, not as a broad brushstroke across all models. Each model should be assessed on whether it is potentially harmful and regulated accordingly, rather than targeting open source at its source and thereby restricting creativity.

This is an intricate, complex and multifaceted act to implement, and even those who agree with it on the whole still disagree on certain areas. But a key sticking point is the public nature of GPAI: anyone can access it. This open, collaborative approach is the fundamental reason progress is achieved, transparency is created and technology is developed for the benefit of society, collectively and individually, rather than for commercial gain.

Freedom of sharing

Open-source licences like MIT are designed to share knowledge and ideas, not to sell finished and tested products, so the two should not be treated the same way. It’s true that the right balance of regulation is needed, especially to improve the reliability and transparency of how these AI models have been built, what types of data have been used to train them and whether there are any known limitations. But this can’t come at the cost of the freedom to share knowledge.
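
The licence text itself makes the point. The standard MIT licence ships with a blanket disclaimer – ‘THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT’. Code shared under those terms is knowledge offered as-is, not a product warranted as fit for deployment.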

Right now, it feels like the Artificial Intelligence Act is targeted at creators for openly sharing knowledge and ideas. The regulation should instead be tailored towards the people who use open-source software, requiring them to be more cautious and to conduct their own research and trials before launching to a wide audience. This would expose the bad actors who want to use creators’ work in commercial projects without investing in any extra research or quality controls of their own.

The end developer should be the one held accountable and responsible for carefully examining these models and conducting thorough quality checks before serving their users. These are the people who – in the end – will be benefitting commercially from the open-source projects. But, in its current form, the framework shows no clear intention of doing this. The core ethos of open source is, after all, to share knowledge and experience without any commercial gain.

Regulate openly to innovate openly

Adding stringent legal liabilities for the developers and researchers of open-source GPAI will simply limit technical growth and innovation. It will discourage developers from sharing their ideas and learning, and prevent new start-ups and aspiring individuals from accessing cutting-edge technology. It will deny them the chance to learn from and be inspired by what others have built – and to build on it themselves.

This is not how technology and engineering work in the modern world. Sharing and building on top of others’ work is at the core of how we develop technical products and services – and this must be maintained. Regulations are good and should be welcomed, but not at the cost of creativity and scientific advancements – rather, they should be applied on the application front to ensure responsible outcomes. In the face of changes to the AIA, one thing is clear – open-source culture must be cherished.

Nitish Mutha is co-founder and CTO of Genie AI
