OpenAI’s Influence on Artificial Intelligence

Most of us know what AI is, or at least its general functionality and purpose. I decided to dig deeper into my knowledge of how the field began, primarily the contribution of arguably the most popular AI company: OpenAI.

On November 30, 2022, OpenAI released an AI chatbot called ChatGPT. It gained enormous popularity worldwide shortly after its release. People use it to write code and emails, clarify complex topics, debug programs, and even draft essays, for personal, professional, and even educational purposes alike.

Turning to the history of the company: OpenAI was founded in 2015, when multiple investors pledged over $1 billion to the venture. These investors included influential figures such as Elon Musk and influential companies such as AWS and Infosys. OpenAI continued to make incremental advances in the developing field. In 2018, however, Musk resigned from the board of directors, citing potential conflicts between OpenAI and his own AI work on Tesla's self-driving cars. In 2019, the organization transitioned from a non-profit to a capped-profit company (one with a cap on the returns investors can take). Building on these developments, OpenAI was able to release ChatGPT (after several earlier, non-free AI models) for free. This caused a boom in revenue, allowing for more research into the thriving and rapidly advancing field.

The AI field is evolving quickly: both the number of AI companies and the investments being made into the field are growing at an exponential rate. OpenAI is largely responsible for the recent boom in interest in AI, as ChatGPT was the most successful AI model to be released for free. This widespread interest also increased curiosity, leading more people to study artificial intelligence and even the theory behind it. OpenAI is also responsible for essentially unifying the artificial intelligence field: prior to the launch of ChatGPT, AI companies lacked any common goal. Following it, however, much of the scattered research has since converged.

The company, however, was not immune to controversy, even involving its key figures.

On November 17, 2023, Sam Altman (CEO and co-founder) was removed as CEO by the board, which cited a “lack of confidence” in him. Following his removal, many key OpenAI researchers resigned. Microsoft, which already held a large stake in the start-up (reportedly entitled to 49% of its profits), was quick to pick up Altman to lead its own AI research. After many employees threatened to join Microsoft unless Altman was back at OpenAI, the board was pressured to reinstate him as CEO. Some say that Altman was fired because of the company's advancements into newer and more cutting-edge AI models, such as Q* (pronounced “Q-Star”).

Q* is a reported OpenAI project that largely focuses on logical and mathematical reasoning. Existing AI models draw on knowledge that already exists (calculus, among other things), simply applying it at an exponentially faster rate than humans would. Q* is expected to go well beyond that, to the point of creating its own mathematical and logical reasoning in order to solve problems. “If you can create an AI that can solve a problem where you know it hasn’t already seen the solution somewhere in its vast training sets, then that’s a big deal, even if the maths is relatively simple,” said Altman regarding the developing Q*.

This, however, raised security concerns, because progress on AI safety was far outpaced by the development of this model; that gap is rumored to be why Altman was fired. Microsoft President Brad Smith argues that there are no dangerous breakthroughs in this model or in OpenAI's current research, saying it will take years, if not decades, before computers are more powerful than people. He nevertheless suggests that emphasis be placed on the security of artificial intelligence.

As we continue to incorporate artificial intelligence into our everyday lives, this field is only going to become more vital and more developed in the coming years (if we have not reached that point already). I am excited to see how the unreleased AI models will play out in the future and, hopefully, educate us just as much. Much of our future will be integrated with artificial intelligence, incorporated into our college careers and beyond.

I also can’t help but think of the movie Terminator (and other such movies) where artificial systems are taken too far, possibly to the end of humanity. Hopefully, we avoid such an event!

5 thoughts on “OpenAI’s Influence on Artificial Intelligence”

  1. Thanks for sharing your insights on AI. I am not certain quite yet if I am a fan or not. We will have to see how it goes I guess. Many questions still to answer. I have enjoyed reading your blogs this semester.

  2. Hi Adi. Ever since the release of ChatGPT in 2022, I’ve felt that AI will become an irreplaceable part of our lives. Before that time, I remember seeing stuff like people sponsoring AI or people making AI to do some tasks but 2022 was the year when people started using AI to help themselves on a wide scale of things. I think that for now AI is unpredictable, and we’ll have to wait a few years to see the direction it heads toward. What makes Q* interesting in my opinion is that if it can use its own reasoning skills to solve completely new problems, then it could be smarter than all of humanity combined, instead of just being as smart as its sources, like ChatGPT. While that sounds good for problems like solving cancer, it’s honestly scary, because a terminator situation might happen, but I hope that AI only serves to help humanity.

  3. After doing the AI research paper in Lang last year, I also became interested in AI. I think nearly everyone agrees on this, but we need to be very careful with how we let AI advance. America is not, never has been, and probably never will be a one hundred percent capitalist country. I think it’s only right to prioritize the well being of our human workers that could potentially be replaced by AI, even if that stunts our technological advancement. Additionally, I think it is really interesting that the fate of the future of AI is held within the hands of a select few corporations. The government doesn’t seem to have much involvement or regulation within the industry. If a CEO with an amazing track record like Sam Altman could go from the face of AI development to ousted from his own company and then reinstalled again in a matter of days, how does that speak to the power that we are giving individuals to control AI’s growth?

  4. Adi, I really appreciate your attention to detail in your blog post. The progression and development of AI is and will continue to be both a pressing issue and a tool in the world as a whole, although I cannot say which side of the argument I am leaning towards. Obviously, the amenities, efficiency, and advancements that AI provides allow for a more seamless and easier work lifestyle, but at what cost? I believe the loss of jobs and possible “takeover” by AI, as you referenced with the Terminator film, is honestly quite likely. I do fear that its elimination of jobs that require less human skill can lead to the further impoverishment of the lower-class workforce. For example, the electronic kiosks that you may have seen gradually appearing more and more in the chain restaurant scene could be taking away countless opportunities for work from minorities or those in need. So, I do believe that AI is a positive and useful tool, although it needs to be closely monitored and used in moderation.

  5. Hey Adi,
    This was a great read, and I really liked seeing a more objective take on the OpenAI situation because I keep hearing about the stuff with Sam Altman, but I’ve been too lazy to do my research, so it’s nice to finally get to learn more about what happened from the perspective of someone my age. Also, at the very end, you discussed your own thoughts and brought up some very interesting points about the relevance of AI technology. There is a sort of rule with new technology where every addition to the tech world follows a logistic curve like _/⎺, where nothing happens, then the technology is created at a “critical point” and grows exponentially until it tapers off. The interesting thing about this with AI is that we don’t particularly know where on this curve we are, since it’s difficult to predict new inventions. It’s possible what we have now with the likes of ChatGPT is as far as AI could go, or in 10 years, it could be running the entire world.
