AI is mastering language. Should we trust what it says?

But while the fluency of GPT-3 has surprised many observers, the large-language-model approach has also drawn significant criticism in recent years. Some skeptics argue that the software is capable only of blind imitation – that it mimics the syntactic patterns of human language but cannot form its own ideas or make complex decisions, a fundamental limitation that would prevent LLM systems from ever maturing into anything resembling human intelligence. For these critics, GPT-3 is just the latest shiny object in the long history of AI hype, channeling research dollars and attention away from other promising approaches that might otherwise mature. Other critics believe that software like GPT-3 will forever be compromised by the biases, propaganda, and misinformation in the data it was trained on, meaning it will always be irresponsible to use it for anything more than parlor tricks.

Wherever you come down in this debate, the recent pace of improvement in large language models makes it hard to imagine that they will not be commercially deployed in the coming years. And that raises the question of exactly how they – and, for that matter, the other advances of modern AI – should be unveiled to the world. With the rise of Facebook and Google, we have seen how dominance in a new realm of technology can quickly lead to astonishing power over society, and the ultimate impact of AI threatens to be even more transformative than that of social media. What is the right kind of organization to build and own something of such scale and ambition, with such promise and such potential for abuse?

Or should we build it at all?

The origins of OpenAI

In July 2015, a small group of technologists gathered for a private dinner at the Rosewood Hotel on Sand Hill Road, the symbolic heart of Silicon Valley. The dinner took place against the backdrop of two recent developments in the technology world, one positive and one more troubling. On the one hand, radical advances in computational power – along with new breakthroughs in the design of neural nets – had created a palpable sense of excitement in machine learning; there was a feeling that the long "AI winter," the decades in which the field had failed to live up to its early promise, was finally beginning to thaw. A team at the University of Toronto had trained a program called AlexNet to identify classes of objects in photographs (dogs, castles, tractors, tables) with accuracy far beyond what any neural net had previously achieved. Google quickly moved to hire AlexNet's creators, while acquiring DeepMind and launching its own initiative called Google Brain. The mainstream adoption of intelligent assistants like Siri and Alexa demonstrated that even relatively scripted agents could be breakout consumer hits.

But at the same time, a seismic shift in public attitudes toward Big Tech was underway, with once-popular companies like Google and Facebook criticized for their near-monopoly power, their amplification of conspiracy theories, and their relentless pull toward algorithmic feeds designed to capture our attention. Long-term fears about the dangers of artificial intelligence were surfacing on op-ed pages and on the TED stage. Nick Bostrom of Oxford University published his book "Superintelligence," laying out a range of scenarios in which advanced AI might diverge from humanity's interests, with potentially catastrophic consequences. In late 2014, Stephen Hawking told the BBC that "the development of full artificial intelligence could spell the end of the human race." Suddenly it seemed that algorithms might not merely sow polarization or sell our attention to the highest bidder – they might end up destroying humanity itself. And once again, the decisions shaping these technologies appeared to rest with a handful of powerful companies.

The agenda for that July dinner on Sand Hill Road was nothing if not ambitious: figuring out the best way to steer AI research toward the most positive outcome possible, avoiding both the short-term negative consequences that had bedeviled the Web 2.0 era and the long-term threats to humanity's existence. From that dinner, a new idea began to take shape – one that would soon become a full-time obsession for Sam Altman of Y Combinator and Greg Brockman, who had recently left Stripe. Interestingly, the idea was not so much technical as organizational: if AI was to be unveiled to the world in a safe and beneficial way, it would require innovation at the level of governance, incentives, and stakeholder involvement. The technical path toward what the field calls artificial general intelligence, or AGI, was not yet clear to the group. But the troubling predictions of Bostrom and Hawking convinced them that the achievement of humanlike intelligence by AIs would concentrate an astonishing amount of power, and moral burden, in whoever eventually managed to invent and control them.

In December 2015, the group announced the formation of a new entity called OpenAI. Altman signed on as the venture's chief executive, with Brockman overseeing technology; another attendee of the dinner, AlexNet co-creator Ilya Sutskever, was recruited from Google to head research. (Elon Musk, who also attended the dinner, joined the board of directors, but left in 2018.) In a blog post, Brockman and Sutskever laid out the scope of their ambition: "OpenAI is a non-profit artificial-intelligence research company," they wrote. "Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." They added that, in the spirit of openness, they would seek to distribute the benefits of the technology as widely and evenly as possible.

The founders of OpenAI would publish a public charter three years later, spelling out the core principles of the new organization. The document read like a deliberate update of Google's early "don't be evil" slogan, an acknowledgment that maximizing the social benefits of new technology – and minimizing its harms – was not always so simple a calculation. Where Google and Facebook had come to dominate their fields through closed-source algorithms and proprietary networks, the founders of OpenAI promised to go the other way, sharing new research and code freely with the world.
