The OpenAI crisis and the computer apocalypse

© EPA-EFE/ETIENNE LAURENT   |   An illustration picture shows a mobile phone displaying the logo of OpenAI and Microsoft Corporation’s ChatGPT, in Los Angeles, USA, 11 September 2023.

A few days ago, I was working on my laptop when, all of a sudden, the screen went dark for a few seconds and the text I was writing disappeared. Before I could figure out what had happened – I first thought the battery had died, although nothing like that had ever happened before – everything went back to normal. All the windows were there, the keyboard was functional. I didn’t notice any change, with the exception of a new icon on my taskbar. It looked like a bluish or greenish ribbon bearing the initials PRE, short for “preview”. Hover over the app icon with your mouse and it reveals its identity: Copilot.

Right now, Copilot is the cherry on top of Microsoft’s cake, a fresh product out of the research labs of the American IT giant. In simple terms, it is an AI chatbot, an assistant built on Large Language Models (LLMs) and designed to help you use various Microsoft 365 products. For instance, you can tell Copilot to make a SWOT analysis based on plans presented in a videoconference, or to compile a comprehensive financial review based on the reports of recent years. Copilot can just as easily turn a note into a PowerPoint presentation, or sit in on a Teams video meeting and then summarize the main points under discussion and draft a task list for the participants. The app is also integrated as an extension to the Edge browser, so it can be used for online searches as well.
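Under the hood, assistants of this kind typically wrap a large language model behind a carefully worded instruction. The sketch below is purely illustrative – it is not Microsoft’s Copilot code, and the model name, prompts and sample transcript are assumptions – but it shows the general pattern of asking an LLM, here via OpenAI’s public chat completions API, to turn meeting notes into a summary and a task list.

```python
# Illustrative sketch only: NOT Copilot's actual implementation.
# Assumes the official `openai` Python package (v1+) and an API key
# exposed via the OPENAI_API_KEY environment variable; the model name
# and prompts are placeholders chosen for the example.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def summarize_meeting(transcript: str) -> str:
    """Ask an LLM to summarize a meeting transcript and draft action items."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an office assistant. Summarize the meeting in a "
                    "few sentences, then list action items with an owner each."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    # The generated text lives in the first choice's message content.
    return response.choices[0].message.content


if __name__ == "__main__":
    sample = "Ana: the report ships Friday. Dan: I will draft the slides by Wednesday."
    print(summarize_meeting(sample))
```

The real product obviously adds layers on top of this pattern (access to documents, calendars and meeting transcripts, plus formatting into Word or PowerPoint), but the core mechanism – a prompt plus user content sent to an LLM – is the same.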

Copilot is the expression of Microsoft’s efforts to keep up with the latest trend in computing research – artificial intelligence (AI) – a race involving other major players such as Google, which has recently launched its own brand of AI named Gemini. Basically, the current version of Copilot brings together the chatbots Microsoft had launched previously (see Bing Chat), rebrands them, develops them further and integrates them into the Windows 11 UI (user interface).

Microsoft’s chatbot is the brainchild of the company’s partnership with the rising star of Silicon Valley, OpenAI, in which Microsoft holds a 49% stake. Copilot’s design was developed by OpenAI, while Microsoft provided the software with visibility and traction, given that there are over one and a half billion Windows users in the world (although not all of them use Windows 11) and over a billion users of Microsoft Office. By way of comparison, ChatGPT, the artificial intelligence software launched last year by OpenAI, has only about fourteen million daily users. The launch of the new Copilot software coincided, however, with surging controversy at OpenAI, and Microsoft, its partners and the entire industry were sucked into this whirlpool.

An unexpected sacking

As computer screens on all continents went dark only to come back on seconds later at the end of the Microsoft update, OpenAI was facing its deepest crisis since the company was founded in 2015. The company’s structure is very complex right now, as it has branched out from a non-profit entity into a for-profit business. In short, the company’s board decided to sack OpenAI CEO Sam Altman overnight, just days before Thanksgiving. Not even Microsoft had been made aware of the dismissal, the news reaching the company’s headquarters after the fact. The OpenAI board offered only a terse explanation in a brief public statement, saying that “Mr. Altman was not consistently candid in his communications with the board”, which led the board to lose confidence in his ability to continue leading OpenAI.

More details soon surfaced. Apparently, the vote for Altman’s dismissal had not been unanimous, and if Mr. Altman left, more board members would soon follow. Besides, Sam Altman was rumored to be a skilled manipulator – but many would argue that is one of the skills required of a CEO, isn’t it? Tensions at Microsoft started to rise, as it was not just Copilot that was at stake, but other products in the business plan as well. The CEOs of Microsoft and OpenAI had been tied by a relationship of trust cemented by billions of dollars in joint investments. Having said all that, it is not hard to guess which side the IT giant leaned toward. Further weight was added by an open letter signed by several hundred OpenAI employees expressing support for their sacked boss. In this context, Sam Altman was reinstated a few days later, the board members were replaced and the smoke cleared. However, beneath the apparent calm, many questions are still simmering, the hottest of which concerns the impact of artificial intelligence on human civilization and its chances of survival in the new era of supercomputers.

Computers as the dominant species

OpenAI was founded eight years ago with the specific purpose of protecting mankind against the unchecked development of artificial intelligence. The company set out to carefully probe this field and pledged to release products on the market only after they had undergone thorough testing. This continues to be the official goal of the company, although, judging by how much money and power are in play in this sector, doubts have begun to arise concerning its original commitment. Sam Altman’s demise and subsequent reinstatement might be tied to this possible conflict between the “enthusiasts” of artificial intelligence and the “conservatives”, who prioritize its safe use. The latter fear that, once so-called Artificial General Intelligence is achieved (meaning once computers develop reasoning and become self-aware), it might turn against humanity and try to destroy it, either because it regards mankind as a threat, an obstacle to fulfilling the objectives it was hardwired to achieve, or because it perceives people as raw material that can be used for production.

Artificial intelligence, the “conservative” camp argues, can self-improve at an exponential pace. For instance, how long before such AIs start investing in stocks and gain enough capital to fund their own self-sustaining power sources – in a nutshell, how long before they are no longer controlled by man? If AIs could access the entire literature on psychology, how long would it take them to learn to manipulate their users, if only to pretend they are harmless while all the while planning a major strike? How hard would it be for AIs to “escape” their confined space onto the world wide web, with the assistance of some naïve user? To say nothing of recent research showing that data can be smuggled from an isolated computer to a nearby smartphone by manipulating the signals given off by the computer’s fan. In such a scenario, Sam Altman would be the “enthusiast” who got disconnected from reality and who, whether aware of it or not, would sacrifice safety for the sake of money and power.

Others say Sam Altman’s sacking has nothing to do with any ideological dispute, and that it was ultimately the product of thoroughly human actions and passions.

“Suspicion”, “impressions”, “manipulative actions” and even the thirst for power are easier explanations for the current crisis facing the OpenAI board than a great clash of principles that suddenly escalated. The common view is that the software and artificial intelligence algorithms already in operation, from IBM’s Deep Blue to ChatGPT and Copilot, are far from replacing humans in their work, even though they are capable of defeating chess grandmasters or working out daily schedules for us – in the end, they are merely tools.

From recommendation to obligation

The Microsoft-OpenAI dispute created a ripple effect that swept the entire industry. OpenAI is the undisputed leader in this sector: four in five software developers say they are more likely to use OpenAI models than those produced by competing companies. During the crisis, however, a few dozen OpenAI clients contacted the company’s rival Anthropic, or representatives of Cohere, another software startup with ties to Google. As the crisis unfolded, Amazon’s cloud division, another backer of the AI models designed by Anthropic, laid the groundwork for cooperation with developers who might leave OpenAI and had turned down Microsoft’s contract offers.

A more profound, though for the time being less spectacular, impact of this scandal can be felt at the political level. The authorities have been watching the evolution of artificial intelligence with great concern. Applications that let users build their own chatbots – an initiative backed by Sam Altman – are bound to cause additional headaches. In July, the Biden administration urged all stakeholders – including companies such as Google, Meta, Microsoft and OpenAI – to agree to terms under which experts could properly test products before their market release. Four months later, the British government came up with a similar initiative, one of its demands being that AI products must not pose a threat to national security. Coincidence or not, shortly after the OpenAI crisis, the Americans moved from invitation to action. An executive order signed by President Joe Biden now compels all major companies operating in this field (companies are ranked by the computing power their AI products require) to notify the government and share the results of the security tests carried out beforehand. Since companies in this sector have failed to agree on security, politicians are expected to step in, The Economist writes. I found it striking that a liberal magazine such as The Economist should consider the state’s involvement in this matter a good thing.

Another player that recently joined the chorus calling for increased AI regulation is the European Union, which has announced that Member States have reached an agreement on a special law regulating the use of artificial intelligence. The document defines artificial intelligence as software that uses parameters set by humans to “generate content, make predictions, recommendations and decisions with an impact on the environment in which it operates”. Particular concerns of the new law are the safety of artificial intelligence for European citizens and transparency about when artificial intelligence is being used. Other initiatives similar to the European model (which is expected to be submitted to the European Parliament in early 2024) are in various stages of development in the United States, the United Kingdom and China.

Although it appears to be a cutting-edge technology that has yet to produce effects, artificial intelligence is already integrated into various fields of activity. As long as computers relied on raw analytical power alone, humans could still beat machines at Go. But once computers became able to integrate their own experience into their game strategy, things changed, and people were no longer a match for them. The story is similar with chess, where human players have been losing to machines ever since raw computing power crossed a certain threshold, to say nothing of the current situation, where chess software is self-improving. But the point is that people have continued to play chess and Go, with computer tournaments becoming more of a laboratory experiment for avid scientists. Sure, some people like to peek at computers’ moves under certain circumstances, but the two are worlds apart: on one side, human emotion, talent and fallible calculation; on the other, the near-flawless precision of our silicon counterparts. The two worlds don’t mix and, for the time being at least, they’re both doing fine.
