The Role of Big Tech in Regulating AI: Who Decides if AI is ‘Safe’?

In recent developments in artificial intelligence (AI), major technology companies such as Google, Microsoft, Meta, and Amazon have sharply increased their investments in AI infrastructure, data centers, and AI startups. Concurrently, the U.S. Department of Homeland Security has established an Artificial Intelligence Safety and Security Board that includes top executives from these tech companies and from AI startups.

Nevertheless, questions have been raised about whether it is appropriate for Big Tech to hold the authority to regulate AI, given its vested interests in the industry, and about who should be responsible for deciding whether AI systems are safe and secure. Some argue that granting Big Tech the power to regulate AI amounts to letting the fox guard the henhouse. Others counter that these companies' involvement is vital to deploying AI safely in critical infrastructure.

The debate continues between those who believe Big Tech seeks to stifle AI competition and those who advocate for AI regulation. What is clear is that the power to shape AI may ultimately be concentrated in the hands of the wealthiest tech companies, raising legitimate concerns about the transparency and accountability of AI regulation.

Elsewhere, OpenAI is facing lawsuits from newspapers alleging copyright infringement over the scraping of their articles for AI training. The litigation comes even as a growing number of publishers sign licensing agreements with OpenAI, heightening concerns about the use of AI in content generation.

On the funding front, Paris-based AI startup Holistic, founded by former Google DeepMind researchers, has reportedly raised a $200 million round. The founders' DeepMind pedigree has generated considerable enthusiasm, making the raise a notable milestone for the AI sector.

Furthermore, OpenAI has made ChatGPT's Memory feature available to all paying subscribers, enabling users to ask ChatGPT to remember specific details from earlier conversations. The move underscores continued progress in AI capabilities and user interfaces.
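For readers curious how a feature like this might work under the hood, here is a minimal, purely illustrative sketch of one common pattern: persisting facts a user asks the assistant to remember, then injecting them into the context of future conversations. OpenAI has not disclosed its implementation, so every name here (MemoryStore, memory.json) is a hypothetical assumption, not ChatGPT's actual design or API.

```python
# Hypothetical sketch of a conversation-memory pattern: persist facts the
# user asks the assistant to remember, then inject them into future prompts.
# This is NOT OpenAI's implementation; all names here are illustrative.
import json
from pathlib import Path


class MemoryStore:
    """Persists remembered facts to disk between sessions."""

    def __init__(self, path: str = "memory.json"):
        self.path = Path(path)
        # Load any facts saved in earlier sessions.
        self.facts = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def remember(self, fact: str) -> None:
        """Store a new fact and persist it immediately."""
        self.facts.append(fact)
        self.path.write_text(json.dumps(self.facts))

    def build_context(self) -> str:
        """Format remembered facts for injection into a new conversation."""
        if not self.facts:
            return ""
        return "Known about the user:\n" + "\n".join(
            f"- {fact}" for fact in self.facts
        )


if __name__ == "__main__":
    store = MemoryStore()
    store.remember("Prefers concise answers")
    # In a real system, this string would be prepended to the system
    # prompt sent with each new conversation.
    print(store.build_context())
```

In this sketch, memory survives between sessions simply because it is written to disk and re-read on startup; a production system would presumably add per-user storage, deletion controls, and relevance filtering.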

In the diplomatic arena, the U.S. and China are scheduled to engage in high-level discussions on AI to address risks and safety concerns. This dialogue is particularly significant against the backdrop of strained relations between the two countries.

Despite efforts to regulate AI and harness its potential, concerns persist about the transparency and accountability of its development and deployment. Balancing innovation with responsible use remains a central question in technology and policy.

In conclusion, the evolving landscape of AI regulation, funding, and international dialogue underscores the need for deliberate and inclusive decision-making. As the AI industry continues to expand and advance, addressing the complex ethical, legal, and technical questions surrounding the use of AI remains essential. (Author: Sharon Goldman)