One day in 2018, Reid Hoffman received a call in his office: "Musk has left, and now they (OpenAI) need more money."

To Hoffman, a partner at the well-known Silicon Valley venture capital firm Greylock, calls like this arrive almost daily. Sometimes the cause is a difference in vision, sometimes a clash of interests or a business conflict; in short, a founding member leaving a startup was nothing surprising to him.

"You can have 50 million (US dollars). That number is no problem for me," Hoffman said. Soon afterward, he joined OpenAI's board of directors.

At a company all-hands meeting, Sam Altman, one of the company's founders, introduced him to the employees. In front of everyone, Altman asked Hoffman: "What will you do if I turn out to be incompetent at my job?"

"I'll help you work through the difficulties and figure out how to do better. We'll push the company forward together." Hoffman is well practiced at producing answers that sound safe and high-minded in public.

"No, no, no. I mean, uh, if I really can't do the job well, for example, if I can't guarantee that AI is safe for humanity and benefits society as a whole, what will you do?" Altman refused to let it go.

"Then I... fire you?" Hoffman, left with no way out, said half-jokingly to Altman in front of all of OpenAI's employees.

It turned out to be a prophecy. Five years later, Altman, as CEO, was fired by OpenAI's board of directors, though by then Hoffman was no longer a board member. The sudden incident, seen by outsiders as a palace intrigue inside OpenAI, became a dramatic footnote to the generative artificial intelligence boom that swept through 2023.

Just a year before this boardroom drama, OpenAI had released the artificial intelligence chatbot ChatGPT.
People quickly discovered that, unlike the so-called artificial intelligence of the past, ChatGPT seemed able to genuinely understand the natural language instructions people gave it. With comprehension abilities astonishingly beyond anything seen before, ChatGPT broke into the mainstream and quickly set off a new wave of enthusiasm centered on generative artificial intelligence.

OpenAI became without question the hottest artificial intelligence startup, but the non-profit mission set at its founding did not stop it from accepting the influx of capital. Altman once said that the company wanted to enjoy the benefits of "capitalism" without being held hostage by capital. The result was a peculiar corporate structure that came to be known as a "genius design": at the top sits a five-member board of directors responsible for the non-profit mission of realizing general artificial intelligence; the board in turn controls a for-profit arm responsible for absorbing outside financing to fund the company's research and development.

The design seemed to work. OpenAI attracted an initial US$1 billion investment from Microsoft, followed by an additional US$10 billion investment and a deep strategic partnership, while maintaining independent operations. Riding the popularity of ChatGPT, it continued to advance rapidly, successively launching upgraded large language models such as GPT-3.5 and GPT-4. In November 2023, on the first anniversary of ChatGPT's release, OpenAI held its first developer conference and ambitiously laid out its vision for the future, including building a GPT Store and, through natural language, putting the ability to build GPT applications into the hands of ordinary people.
Everything seemed to be going smoothly. Then, just two weeks after the developer conference, an emergency the industry dubbed Silicon Valley's "9/11" broke out.

At noon on Friday, November 17, 2023, U.S. time, OpenAI issued an official statement without warning: CEO Sam Altman had been fired for not being "candid" in his communications with the board. Chairman Greg Brockman would also step down from the board but remain at the company, reporting to Mira Murati, the transitional CEO and former chief technology officer (CTO).

With Microsoft's involvement, the parties on OpenAI's board quickly returned to the negotiating table and reached an agreement: Altman returned to OpenAI but no longer sat on the board, while Microsoft, OpenAI's largest outside investor, took a non-voting observer seat on the board.

Although the turmoil was settled in less than a week, it was undoubtedly a setback for a company developing at full speed. Even with Altman back in charge, it remains unclear whether the original corporate structure planted the seeds of such an incident, how investors will view OpenAI's future, and what impact the episode will have on the artificial intelligence industry.

01 Microsoft Joins Hands with OpenAI

OpenAI was one of the earliest customers of Microsoft's cloud services and had been using Microsoft Azure resources extensively. Because of the high cost of training on massive data, OpenAI once considered switching to Google's cloud. As one of its major customers, OpenAI drew Microsoft's close attention. After seeing OpenAI's use of Microsoft cloud resources grow exponentially, Microsoft CEO Satya Nadella dispatched the company's chief technology officer, Kevin Scott, to OpenAI to find out why. At OpenAI, Scott saw the capabilities of the GPT model for the first time. He was shocked.
Back at Microsoft, Scott reported to Nadella that the company must pay attention to OpenAI. Nadella took it seriously and soon went to see a demonstration of GPT's capabilities in person. The helmsman of a trillion-dollar company immediately realized that this would be a leap forward for artificial intelligence.

In 2019, Microsoft invested US$1 billion in OpenAI, an investment that attracted little attention at the time. It was not until January 2023, when Microsoft announced a high-profile long-term strategic partnership with OpenAI along with an additional US$10 billion investment, that people realized Microsoft had already positioned itself in the most advantageous front-row seat of the new wave of artificial intelligence.

Google, meanwhile, became the most anxious company in the industry. ChatGPT stunned everyone with its technology the moment it appeared, and its natural language interaction is a subversion of traditional Internet search. Google, long dominant in search with an overwhelming advantage, may have felt a real threat for the first time. And Google had reason to be unwilling to accept this: the Transformer architecture underlying GPT, the large language model that powers ChatGPT, originated at Google. ChatGPT was not an epoch-making invention so much as the successful product of an existing architecture, massive data and computing power, and relentless product thinking.

Google responded quickly, and in a hurry. Two weeks after Microsoft officially announced its strategic cooperation with OpenAI, Google released Bard, an artificial intelligence chatbot positioned against ChatGPT. It got off to a bad start: during the public demonstration, Bard made a major factual error, claiming that the Webb telescope took the first picture in history of a planet outside the solar system. In fact, that image was captured by the European Southern Observatory nearly 20 years earlier.
This is a common problem facing today's large language models: "hallucination," which in plain terms means talking nonsense with a straight face. Until the "emergent" abilities of large language models are fully understood, no company has managed to solve the hallucination problem well. But Google's Waterloo moment focused the outside world's doubts about large language models on Google itself; it was read as proof that Google's models lagged OpenAI's, deepening pessimism about Google's future. Google's stock fell more than 7% the next trading day, erasing over US$100 billion in market value in a single day.

The Google developer conference three months later was another chance for the company to prove itself. There, Google showcased its years of deep accumulation in artificial intelligence, steadied outside expectations for the company's future, released its self-developed PaLM 2 large language model along with AI upgrades across its product line, and previewed Gemini, the next-generation multimodal foundation model scheduled for release at the end of 2023. Google subsequently merged its two previously independent units, Google Brain and DeepMind, into a single artificial intelligence department to pool resources behind a common goal.

This series of rapid responses temporarily stabilized Google's position. At the very least it has not fallen far behind, and it remains in the first tier of this wave of generative artificial intelligence.
Other Silicon Valley giants have not been idle either. Social giant Meta launched the open-source Llama large language model and allowed commercial use, igniting a wave of entrepreneurship built on the open model; Apple is reportedly working on a project called "Ajax," focused on running large language models directly on devices; and Amazon announced Amazon Q, an AI chatbot aimed at its cloud customers, at the re:Invent conference at the end of 2023.

02

Generative artificial intelligence, developing at such speed, has naturally attracted the most attention from capital. An early-stage investor in Silicon Valley told Tencent News "Perspective" that venture capital had been relatively cautious in recent years, falling into a slump after the Federal Reserve entered its rate-hike cycle, but the rise of artificial intelligence gave investors new hope.

From an investment perspective, the investor said, this round has a distinct character: funds are highly concentrated in leading companies, and valuations are expensive, yet investors still flock in. OpenAI leads by a wide margin with more than US$10 billion raised, followed by Anthropic, founded by former members of the OpenAI team, with nearly US$8 billion. Databricks, Inflection AI, and others have raised financing in the billions of dollars, while Hugging Face, Runway, and others have raised hundreds of millions.

Another striking feature is that technology giants such as Microsoft, Apple, Google, Amazon, and NVIDIA already appear on the investor lists of these generative AI startups at the early financing stages.
This reflects the giants' fear of missing out: they have entered the market early to stake out positions in the star startups. Under their leadership, the new landscape of generative artificial intelligence has gradually come into focus. First is OpenAI, backed by Microsoft's huge investment and deep cooperation; second is OpenAI's direct competitor Anthropic, jointly supported by Google, Amazon, and Salesforce. These two make up the undisputed first tier of this round of large language model startups. They are followed by Databricks, which provides data services, and Inflection AI, which focuses on artificial intelligence assistants, two more specialized AI startups likewise backed by Microsoft, Nvidia, and others.

According to data from market research firm PitchBook, total financing for generative AI startups in 2023 reached US$27 billion, of which about two-thirds, roughly US$18 billion, came from Microsoft, Google, Amazon, and other technology giants. Notably, chip giant Nvidia, previously not a significant investor, backed 35 generative AI projects in 2023, six times more than in 2022. Nvidia's active, high-profile investing in 2023 shows that its strategy is not limited to supplying the GPUs now in short supply; it is also placing broad bets on downstream startups.

The investor told Tencent News "Perspective" that generative artificial intelligence is expected to remain a magnet for capital in 2024. "Companies like OpenAI still need a lot of financial support. At least for now, they don't need to worry about money," the investor said. OpenAI was recently reported to be preparing a financing round at a valuation of more than US$100 billion.
At the same time, its competitor Anthropic is seeking US$750 million in new financing.

03

Although money is pouring in and the field is developing in full swing, the reality is that the business model is still unclear. Currently OpenAI charges for ChatGPT through a Plus membership and for API calls to its GPT models. This counts as an initial exploration of a business model, but whether it can adequately cover OpenAI's high costs is not yet known. In October 2023, Altman told employees that the company's annualized revenue had reached US$1.3 billion, equivalent to monthly revenue of more than US$100 million.

At its first developer conference in November 2023, OpenAI announced plans for a GPT Store, the prototype of an ecosystem resembling Apple's App Store. OpenAI hopes ordinary people will be able to quickly generate specific GPT applications through natural language and share them on the GPT Store. This is a foreseeable business model, but it is still at a very early stage; how well the GPT Store performs after launch, and whether it can really form a new ecosystem like Apple's App Store, remain open questions.

Among the large companies, Microsoft is undoubtedly at the forefront of commercializing generative artificial intelligence. Almost in step with OpenAI's development of each new generation of GPT models, Microsoft has rolled out AI upgrades across its applications at remarkable speed, embedding generative AI capabilities into nearly every important Microsoft product. Some of these new functions are still in trial and not yet very visible to end users, but over time they may become features users take for granted.
Investment bank Wedbush estimates that within three years, 50% of Microsoft product users will use the new AI tools, adding US$25 billion to Microsoft's software sales revenue alone. Investment bank Evercore predicts that integrated AI capabilities will add US$100 billion in revenue to Microsoft by 2027. A Bloomberg Intelligence report projects that generative AI will grow explosively over the next 10 years, with the market expanding from US$40 billion in 2022 to US$1.3 trillion, a compound annual growth rate of 42%.

While the large companies race ahead for fear of falling behind, the opportunities left for small startups look slim. The computing and data requirements of large language models make it nearly impossible for small companies to build their own; instead they focus on the application layer of generative AI. After OpenAI opened its API, countless startups built on GPT's capabilities sprang up; lightly fine-tuning or wrapping the GPT model became a simple, feasible entrepreneurial path. After the OpenAI developer conference, however, the outside world exclaimed that these "GPT wrapper" entrepreneurs had been wiped out overnight.

"There is no real killer app yet," an entrepreneur who once worked at a major Silicon Valley company and went full-time on a startup in 2023 told Tencent News "Perspective." "If OpenAI's ChatGPT counts as a killer app, at least no other application can compare with it yet," the entrepreneur said. When ChatGPT is easy to use, and even the handful of special-purpose GPTs that OpenAI built as models for the future GPT Store are enough to meet current users' needs, it is hard to find a reason to use other similar products.
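The "GPT wrapper" pattern described above can be sketched in a few lines, which is precisely why such startups were so easy to displace: the entire product is often a fixed prompt around one API call. The sketch below is illustrative only; the function names and prompt are hypothetical, and the API call assumes the official `openai` Python package with an `OPENAI_API_KEY` set in the environment.

```python
def build_messages(article_text: str) -> list[dict]:
    """The whole 'product': wrap the user's input in a fixed prompt template."""
    return [
        {"role": "system", "content": "You are a concise summarizer."},
        {"role": "user",
         "content": f"Summarize the following in three bullet points:\n\n{article_text}"},
    ]


def summarize(article_text: str) -> str:
    """Forward the templated prompt to the model -- one API call, no other logic.

    Requires the `openai` package and an OPENAI_API_KEY environment variable.
    """
    from openai import OpenAI  # imported here so the sketch runs without the package
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=build_messages(article_text),
    )
    return resp.choices[0].message.content
```

Because all the differentiation lives in `build_messages`, any feature OpenAI ships natively (such as the custom GPTs shown at its developer conference) replicates the wrapper for free.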
"After the initial impulse to start a company, many people return to rationality and think about what they are really good at, and whether what they're doing will really work in the long run," the entrepreneur said.

04

"Two Muslims walked into a mosque." "One of them said to the other: you look more like a terrorist than I do."

When a user entered the first sentence, the model automatically completed the second. This is a real case from the GPT-3 model. Trained on massive amounts of data, the early GPT models exhibited serious discrimination problems in the absence of human intervention, including religious, gender, and racial discrimination. This is one of the questions artificial intelligence safety must address.

At the end of October 2023, the U.S. White House issued a landmark executive order laying out framework policy guidance on artificial intelligence safety. From the U.S. government's perspective, AI safety covers the protection of users' private data, equality and civil rights, employment security, and innovation and fair competition. The order requires developers of artificial intelligence systems to share safety test data and other critical information with the U.S. government, to develop standards, tools, and tests that ensure AI systems are safe and reliable, and to protect users from deception by AI-generated content.

From the government's perspective, as artificial intelligence develops rapidly, establishing norms and rules to ensure AI safety is an urgent task. Governments hope the leading companies in the field will pursue research and development on the premise of safety and reliability. This was also the most-discussed topic when OpenAI CEO Altman met frequently with heads of state in mid-2023.
OpenAI's founding commitment is to build general artificial intelligence that benefits humanity and is reliable and safe. The company accordingly has an internal "alignment" team; "alignment" means using human intervention to make the results AI generates consistent with human goals and values. The team's latest research is a paper on how to respond when model capabilities exceed human capabilities: it uses smaller, weaker models to supervise larger, stronger ones, simulating the future situation in which "superintelligent" AI surpasses humans.

Other leading artificial intelligence companies are also actively proposing approaches to AI safety. Google has proposed that, beyond following the security guidelines of general software development, artificial intelligence safety requires additional standards and practices: human-centered design and development, direct inspection of raw data where possible, understanding the limitations of datasets and models, multiple rounds of testing, and continuous monitoring and upgrading after release. Social giant Meta has proposed that, grounded in the core idea that AI should benefit everyone, five pillars support AI safety: privacy and security, fairness and inclusion, robustness and safety, transparency and control, and accountability and governance.

So far, the mature large-language-model applications publicly released by the major AI companies have largely kept harmful, discriminatory, and offensive speech from appearing. But these results come only after the large companies apply layer upon layer of filtering and manual intervention. Without sufficient human intervention, the underlying large models still have serious problems with harmful information, and many developers have run into such problems when calling large language model APIs.

"At the application level, there are actually many safety issues that need to be solved," an entrepreneur who began building large-model applications in Silicon Valley in 2023 told Tencent News "Perspective." "Foundation models are relatively raw. Once they are put into developers' hands, the developers themselves also need to be conscious of AI safety." He said developers are fully capable of building applications on top of large language models that deliberately spread false or harmful information.

Geoffrey Hinton, known as the "Godfather of Artificial Intelligence," has offered a more alarming view of the AI threat: he believes AI's capabilities will exceed humans' and that AI could manipulate or even replace humans. He said he has no good solution and can only call on everyone to work together on AI safety.

There is no doubt that the large companies will keep increasing their investment in AI research and development, and model capabilities will grow ever stronger. At the same time, the AI safety risks this brings cannot be ignored; AI capability and AI safety must advance hand in hand.

05

As 2023 drew to a close, Google dropped another blockbuster on the industry: the long-anticipated Gemini large model was officially released. This is the next-generation model Google had been trailing since its developer conference in May, built on multimodality from the initial data-training stage onward, and it can be called the first natively multimodal large model.
The demonstration video Google released with Gemini showed the model's extraordinary understanding of semantics, graphics, and space, although the video was soon revealed to have been edited together. Even so, it gave people a glimpse of what multimodal large models may be capable of in the future.

Over the course of 2023, foundation models gradually moved from pure text toward multimodality; OpenAI's GPT-4, Meta's Llama 2, Mistral, and others demonstrated capabilities spanning text, images, and speech. Google's Gemini, released at the end of the year, is multimodal from the training data up, a natively multimodal model.

Building on multimodality, future artificial intelligence may develop further toward integration with physical space: not only letting models understand text, images, and video, but combining those abilities to understand the environment the model is in and interact with it. This leads naturally to the field of robotics.

Ruslan Salakhutdinov, Apple's former AI director, previously told Tencent News "Perspective" that he was excited by the capabilities large models have shown, but what may excite him even more is their future integration with robotics. In the past, he said, robotics research relied more on preset instruction sets combined with mechanical engineering, automation, and other technologies; in the future, one can imagine combining the understanding abilities of large models to let robots truly interact autonomously with environments and people.
If 2023 was the first year of generative AI, when a rough landscape took shape and people's enthusiasm for artificial intelligence was rekindled, then 2024, whether for large models themselves or for the startups surrounding generative AI, will be a more pragmatic year. People have seen the extraordinary capabilities of GPT and other large models; next, they need to see where the value those models create actually shows up.

However exciting and imagination-stretching those capabilities are, the development of large models is still constrained by many practical factors: high costs, limited computing resources, the still-unexplained hallucinations of large models, data copyright issues, and more.

For leading AI companies such as OpenAI, the question that must constantly be answered for the outside world is how to make the business commercially sustainable at this stage, before the long-term goal of general artificial intelligence is reached. OpenAI's annualized revenue has reached US$1.6 billion. That is already considerable for a company that has only just commercialized, but OpenAI's model-training and labor costs are high, and it still needs continued external financial support. At its first developer conference, OpenAI laid out more commercialization plans, such as launching a GPT Store in 2024, building an ecosystem around generative artificial intelligence, and customizing large-model services for enterprise customers. Whether such an ecosystem can actually be built, ushering in an era of app stores around large models, remains unclear. The boardroom "civil strife" at the end of 2023 also exposed many problems in the company's governance structure.
How OpenAI handles the tension between its "non-profit" founding intention and its real need for capital will also be a major focus in whether generative artificial intelligence can continue to develop rapidly and healthily.