OpenAI is reportedly close to an AI breakthrough that could 'threaten humanity'

(Image credit: OpenAI)

Sam Altman, chief executive of OpenAI, was ousted by the company's board of directors after staff researchers warned the board about a potentially dangerous AI discovery named Q* (pronounced Q-Star). Q* could radically improve artificial intelligence (AI) reasoning and may represent a major breakthrough in the development of artificial general intelligence, reports Reuters.

One of the peculiarities of generative AI is that it bases its answers on information it has previously "learned" (parsed, examined, indexed, etc.), so the more data fed into a model, the better the model becomes. However, modern AI systems do not truly have cognitive capabilities and cannot reason about their decisions the way humans do.

Q* is believed to be a significant step toward artificial general intelligence (AGI): an autonomous system that can reason about its decisions and therefore compete with humans across various tasks or even entire disciplines. Q* is said to be particularly capable at solving mathematical problems, which typically have a single correct answer, indicating notable progress in AI reasoning and cognitive capabilities.

The model has reportedly demonstrated proficiency in solving mathematical problems at the level of elementary school students. Modest as that sounds, such an advance suggests that Q* could have far-reaching implications and applications in fields that require reasoning and decision-making.

However, the advent of Q* has also sparked concerns about the potential risks and ethical implications of such powerful AI technology. Researchers and scientists within the AI community have raised alarms about rapidly advancing AI capabilities without fully understanding their impact. The development of Q* may have become a focal point in the ongoing discussion at OpenAI about balancing AI innovation with responsible development.

As a result, several OpenAI researchers wrote a letter to the board highlighting the discovery of an AI model they believed could pose a significant threat to humanity. The letter is said to have been a crucial factor in the board's decision to fire Altman, which cited a lack of confidence in his leadership despite his contributions to the company and to the field of generative AI. But then a collective threat of resignation from over 700 employees, who sided with Altman and considered joining Microsoft, made the board change its mind about the ousted CEO.

OpenAI, when contacted by Reuters, acknowledged the existence of both project Q* and the letter. However, the company refrained from commenting on the specifics of the situation or the accuracy of the media reports.

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom's Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • -Fran-
    I, for one, welcome the AI overlords. Heh.

    Regards.
  • Phaaze88
    Eh, the greed and selfish ambitions of a few will push us there inevitably.
    Whether I'll be alive to see it or not is another matter.

    The progression of technology is interesting, but someone always has to twist a dark version of it.
  • vanadiel007
    Phaaze88 said:
    Eh, the greed and selfish ambitions of a few will push us there inevitably.
    Whether I'll be alive to see it or not is another matter.

    The progression of technology is interesting, but someone always has to twist a dark version of it.
    It's not so much a dark version, more of a dark future. There will be massive job loss once AI can make decisions based not just on trained data, but also on learned data and reasoning.

    Many jobs will not require a human anymore, and you can bet companies will exploit that to the fullest.
    This will result in massive uncertainty in society, as we have never faced a situation where the work of many humans is no longer needed.

    What such a new society would look like, I have no idea. But I don't think it will look as good as future thinkers think it will.
  • Phaaze88
    vanadiel007 said:
    It's not so much a dark version, more of a dark future. There will be massive job loss once AI can make decisions not just based on trained data, but also based on learned data and reasoning.

    Many jobs will not require a human anymore, and you can bet companies will exploit that to the fullest.
    This will result in massive uncertainty in society, as we have never faced a situation where the work of many humans is no longer needed.

    What such a new society would look like, I have no idea. But I don't think it will look as good as future thinkers think it will.
    Aye. Job security is going to be at risk - well, already is at risk - for certain fields. Demand for trades (electricians, carpenters, welders, etc.) will stay strong.

    What are the folks whose jobs are replaced by AI supposed to do? Going back to school to pick up a trade won't be an option for all of them.
  • thisisaname
    Phaaze88 said:
    Eh, the greed and selfish ambitions of a few will push us there inevitably.
    Whether I'll be alive to see it or not is another matter.

    The progression of technology is interesting, but someone always has to twist a dark version of it.
    A few years ago I read a book about two different versions of how society could develop, and couldn't for the life of me remember the title.
    Just now I found it with the right search terms: it was Manna. Must find my copy and re-read it. It is both uplifting and depressingly dark.
  • t3t4
    OpenAI is reportedly close to an AI breakthrough that could 'threaten humanity'

    Yes we know, it's called the "Terminator" and it knows how to time travel. It will be back at least 6 times!
  • kb7rky
    Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 5:45am, Pacific Time, November 25, 2023.
    Welp...it's been nice knowing y'all.
  • vanadiel007
    Phaaze88 said:
    Aye. Job security is going to be at risk - well, already is at risk - for certain fields. Demand for trades (electricians, carpenters, welders, etc.) will stay strong.

    What are the folks whose jobs are replaced by AI supposed to do? Going back to school to pick up a trade won't be an option for all of them.
    Even trades are going to hurt. I was reading an article the other day about the first home-building robot being used to build a home.

    It's just the start of things. Think about car manufacturing, home construction, and the sheer number of jobs that would be eliminated.

    Everybody is concerned about the impact of AI, while not many seem concerned about what happens when 500 people are fighting over one job. It will be a wage massacre on a scale we have never experienced, and I am not sure society as a whole could deal with that situation.