Let's play the devil's advocate. And let's say Sam is a conman, any definition of AGI or ASI will never arrive.
What has he to gain from all this? Assuming the bubble will burst in the end. Fortune? He will get his salary over the course of the bubble. I am not entirely sure how he can make money if in the end the money runs out.
Fame and Connections?
While I have very little technical idea about anything LLM, I am well versed in foundry and hardware manufacturing and the supply chain. Ever since he asked for a trillion dollars to build fabs and chips for AI, I have had a very cynical view of him.
> What has he to gain from all this? Assuming the bubble will burst in the end. Fortune? He will get his salary over the course of the bubble. I am not entirely sure how he can make money if in the end the money runs out.
He can make a TON of money by turning the company into a for-profit with stock he can sell. He can do this very, very quickly before the bubble pops. This is the real reason he has been pushing for the for-profit change to happen soon: so he doesn't have to rely on salary and can instead rely on equity. As long as he gets out before the burst, he'll have his fortune.
Why did/do so many people still invest in/create crypto even when they know it's fraudulent?
Oh yes, thank you for the correction. I remember it was some insanely ridiculous number in the trillions. And then, instead of UBI, he talked about giving Universal Basic Compute.
Naturally no comments on change in governance or profit structure in these reflections.
He comments on the founding of OpenAI as though OpenAI, the currently capped-profit company, and OpenAI the non-profit, which today controls it, are the same thing. They are not, and they are planned to be split, a split that cannot possibly be justifiable under the non-profit's charter.
As I watch OpenAI's structural development over time, it becomes increasingly clear that the wildly incompetent board of OpenAI had something justifiable in their firing of Sam.
> We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies.
This aligns with the World Economic Forum's (WEF) agenda of "Collaboration for the Intelligent Age" [0], which Sam is also attempting to coin with a similar title (the "Intelligence" / "Intelligent" Age). [1]
It will be no surprise that he will be invited to tell us all about how AGI will bring the utopia of Universal Basic Income (UBI) to everyone and save the world for the "benefit of humanity".
The truth is, "AGI" is a massive scam to raise more money to inevitably replace workers with AI and race all of it to zero, without any alternative for those lost jobs.
Sam Altman is just yet another pawn used by billionaires to make the billionaire class even richer, push de-regulation propaganda, and push awful neo-liberalism economics.
Hopefully someday LLMs will be hugely beneficial to society for their ability to identify correlations in data. And, by their very nature, they are good with language and thus helpful in programming contexts.
But LLMs have no understanding of what they write.
They do not ponder.
Are not curious; do not wonder.
Do not think or have thoughts.
Do not create or invent.
Do not have Aha! moments.
Maybe some day a machine will be capable of these things. Maybe not. But LLMs - by nature of their algorithmic design - never will.
On Nov 30, 2022 Sam Altman knew exactly how LLMs work. He knew the design of his LLMs was such that they were not - and would never be - sentient. They would never be capable of "understanding" in the manner that living things understand the world, or even the way human beings understand a sentence.
Yet, very soon after Nov 30, 2022 Sam Altman was making statements about how important it was for governments worldwide to address the danger ChatGPT posed before it was too late.
He was on the hype train before Nov 30, 2022.
The Nov 30, 2022 announcement was itself part of the hype train.
OpenAI, Google, Microsoft, Meta, Apple, IBM, etc. have spent - and continue to spend - billions on LLMs. And just like Altman, they know exactly how LLMs work. They know the design of their LLMs is such that they are not - and never will be - sentient. LLMs will never be capable of "understanding" in the manner that living things understand the world, or even the way human beings understand a sentence.
Yet they continue moving the hype wagon forward, faster and faster.
Someone is making lots of money. And soon so many more will lose so much more.
What is the message? I think it's that there are people behind this with human and relatable motives and foibles, that the consequences of this change are difficult to apprehend, that the main tool they have is to be incremental, and that it's just hitting its stride.
The comment about understanding what AGI means was compelling. I'd guess it may be something arrestingly simple, and they have the sense not to be meta about it, or to sound it out with freighted and inadequate words; they will just introduce it as it is.
It's unfortunate because Sam didn't used to be this way[0]. Once you reach such a high position, saying anything of substance appears to be too great a risk.
Boy, we have lost trust in our boy big time. I feel the essay is earnest. He's not giving concrete details, but I'd like to give him the benefit of the doubt.
I think these are the least insightful comments I’ve seen on HN, maybe ever.
Hating from the sidelines is easy. AGI (or whatever semantics you prefer) is simultaneously:
1: One of the most positively influential technologies developed in human history
2: A 1,000-fold magnifier of the failures of our current governance structures.
Adaptation is/will be required, and it’s not going to be easy. But the finish line promises a significantly better future for (potentially) all. Sitting here arguing semantics, complaining about technological evolution (surface level insights, I suggest going a level deeper), or making weird anti-Altman statements instead of anything substantive is, well, not interesting.
I don’t think anyone seriously views technological developments as having ultra clear cut start and finish lines.
Engineering/technological innovation is an iterative process and we have steps that provide value and capture value to then generate and provide more value to capture and so on and so forth.
I don’t think having a universal agreement of what AGI is matters, like at all.
>But the finish line promises a significantly better future for (potentially) all
Citation very much needed. Maybe better for the ultrarich owners, but for the billions out of work trying to find how they're going to get food and healthcare? No.
>Adaptation is/will be required, and it’s not going to be easy.
"Some of you may have to die, and that's a sacrifice I'm willing to make" [1]
Sounds like you have an issue with our current governance structures, which allow too much value capture to flow to the ultra wealthy untaxed, despite them relying on the society supported by said taxes to generate their massive wealth.
This will be true whether you’re for or against AGI development. You’re not solving the underlying problem, you’re just kicking the can down the road for others to deal with.
AI technology is dramatically accelerating these trends and enabling new ways and is itself a problem.
There is a common trope of claiming technology isn't a problem that simply isn't true. Cheap and available technology absolutely changes models. In the 1960s the Stasi could pay a bunch of people to monitor cameras and microphones but it was incredibly expensive. Cheap cameras, cheap hard drives to store footage indefinitely, and cheap image/voice recognition all enable new horrifying forms of control, surveillance, and punishment even if the actual capability to watch people is not new.
I'm just saying that cameras were always going to become cheap. So were hard drives, and image/voice recognition. This was always going to happen, and always will happen, because it is a net good thing overall. However, it also exposes new means of exploitation that did not exist before. AKA, technology solves two problems, and creates a brand new one.
It is futile (and dare I say useless) to cry foul that technological innovation is happening. Instead, we should be revisiting our existing governance structures and remodelling them according to the new reality we live in.
You're arguing against things I didn't say at all. You should re-read my initial comment and try to not project <groupsay> onto it.
This is an incredibly vague essay. Let me be more explicit: I think this is a clear sign of a bubble. LLMs are very cool technology, but they are not the second coming. They can't do experiments; they don't have an imagination; they don't have an ethical framework; they're not agents in any human sense.
LLMs are awesome but I haven't felt significant improvement since the original GPT-4 (only in speed).
The reasoning models (o1 pro) don't show good reasoning capability on the things I ask of them, so I don't expect o3 to be significantly better in practice even if it looks good on the benchmarks.
Still, I think the ARC-AGI benchmark is awesome, and the fact that they are targeting reasoning is a good direction (I just think they need to research more techniques / theories).
I disagree.
Sonnet 3.6 (the 2024-10-22 release of Sonnet 3.5) is head and shoulders above GPT-4, and anyone who has been using both regularly can attest to this fact.
Reasoning models do reason quite well, but you need to give them the right problems. Don't throw open-ended problems at them. They perform well on problems with one (or many) correct solution(s). Code is a great example - o1 has fixed tricky code bugs for me where Sonnet and other GPT-4 class models have failed.
LLMs are leaky abstractions still - as the user, you need to know when and how to use them. This, I think, will get fixed in the next 1-2 years. For now, there's no substitute for hands-on time using these weird tools. But the effort is well worth it.
> one (or many) correct solution(s).
> Code is a great example
I’d argue that most coding problems have one truly correct solution and many many many half correct solutions.
I personally have not found AI coding assistance very helpful, but judging from blog posts by people who do, much of the code I see from Claude is very barebones HTML templates and small scripts which call out to existing npm packages. Not really reasoning or problem solving per se.
I’m honestly curious to hear what tricky code bugs sonnet has helped you solve.
It’s led me down several incorrect paths, one of which actually burned me at work.
> LLMs are awesome but I haven't felt significant improvement since the original GPT-4 (only in speed).
Taking the outside view here - maybe you don't "feel" like it's getting better. But benchmarks aside, there are now plenty of anecdotal stories of scientists and mathematicians using them for actual work. Sometimes for simple labor-saving, but some stories of actually creative work that is partially/wholly based on interactions with LLMs. This is on top of many, many people using this for things like software development, and claiming that they get significant benefits out of these models.
>LLMs are awesome but I haven't felt significant improvement since the original GPT-4 (only in speed).
Absolutely disagree. Are you using LLMs for coding? There has been a 10x (or whatever) improvement since GPT4.
I casually tracked the ability of LLMs to create a processor design in an HDL starting in 2023. I stopped in June of 2024, because Sonnet would basically oneshot the CPU, testbench and emulator. There was another substantial update of Sonnet in October 2024.
https://github.com/cpldcpu/LLM_HDL_Design
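For anyone curious what that kind of tracking can look like in practice, here is a rough sketch of the loop. This is my own illustration, not the code in the repo linked above; it assumes the Anthropic Python SDK and Icarus Verilog are installed, and the prompt, model name, and pass/fail criterion are placeholders.

```python
# Hypothetical sketch: ask a model for a small Verilog design, then compile and
# simulate it with Icarus Verilog. Not the code from the linked repo.
import re
import subprocess
from anthropic import Anthropic  # assumes the Anthropic Python SDK is installed

client = Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

PROMPT = (
    "Write a synthesizable Verilog module for a minimal 8-bit accumulator CPU, "
    "plus a self-checking testbench that prints PASS or FAIL."
)

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=4096,
    messages=[{"role": "user", "content": PROMPT}],
)
reply = response.content[0].text

# Models usually wrap code in markdown fences; pull out the fenced blocks if present.
fence = "`" * 3
blocks = re.findall(fence + r"(?:verilog)?\n(.*?)" + fence, reply, re.DOTALL)
source = "\n".join(blocks) if blocks else reply
with open("cpu.v", "w") as f:
    f.write(source)

# Compile and run the testbench; a failed compile or a missing PASS counts as a failed attempt.
compile_run = subprocess.run(["iverilog", "-o", "cpu.sim", "cpu.v"],
                             capture_output=True, text=True)
if compile_run.returncode != 0:
    print("Compile failed:\n" + compile_run.stderr)
else:
    sim_run = subprocess.run(["vvp", "cpu.sim"], capture_output=True, text=True)
    print("PASS" if "PASS" in sim_run.stdout else "FAIL (or inconclusive)")
```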
In my mind, LLMs are lowering the barrier of searching in the same way Google did in the early 2000s. Back then, you had to very specifically tailor your search key words, not use words such as "the," "a," etc. Google eventually managed to turn queries such as "what's the population of Ghana" into ready-made answers.
LLMs do exactly that for more complex queries, with the downside of possible hallucinations. Suddenly, instead of doing research on the topic, a person looking to become "a programmer" asks ChatGPT to create a syllabus for their situation, and possibly even to actually generate the contents of the syllabus. ChatGPT then "searches the internet" and creates the response.
I have gained confidence that LLMs won't be much more (at least in the next couple of years) than search engines with the upside of responding to complex queries and the downside of hallucinations. And for that, I find LLMs quite useful.
The problem is that the investors forking over the money that fuels the research, as well as the development and maintenance of this tech, are doing so expecting huge returns that are unlikely to come during their lifetimes.
Once the AI winter comes again, investor money will dry up as the realisation sets in that LLM evolution has peaked and that further investment in LLMs is all declining marginal utility.
Once the snow settles, only the open source models from big companies will survive, likely treated mostly as another egg in the basket of opportunities. Companies like OpenAI will be the most affected, as their reason for existing is getting orders of magnitude more value out of LLMs than they do today.
As AI has continued to improve quickly, it’s been interesting to watch the sentiment of the tech community get more negative on it. “It’s not very good yet.” “No improvement since GPT-4.”
Objectively, today’s AI is incredibly impressive and valuable. We blew past the Turing test and yet no one seems to marvel at that.
I’d argue we still have yet to discover the most effective ways to incorporate the existing models into products. We could stop progress now and have compelling product launches for the next few years that change industries. I’m confident customer support will be automated shortly - a previously large industry for human employment.
Is the negative sentiment fear from tech folks because they have a lot to lose? Am I just not understanding something? It feels like I can watch the progress unfold, but yet the community here continues to say nothing is happening.
> We blew past the Turing test and yet no one seems to marvel at that.
We didn't blow past the Turing test. Such comments are often made, but I think they are a result of misunderstanding or overgeneralizing what a Turing test is. If you interact with a chatbot and it produces human-like answers, it doesn't mean it would pass or blow past the Turing test. Turing proposed a rigorous setup for the test; he designed it in such a way that passing the test could really mean reaching human-level intelligence. In the Turing test a human is asked to use all of their intelligence to reveal which of the two peers in a conversation is human and which is a machine. Current chatbots are very far from passing such a test.
I'll share a perspective as someone who doesn't really have a dog in the fight (For the record, I'm over 20 years into my career but don't fear losing roles/income/status due to AI, and am using it in my projects and can see plenty of ways I could benefit from it):
Lots of people on HN have been in tech for many years or a few decades and have seen several hype waves come and go, including ones involving AI. Plenty of us understand the technology that underlies current AI tech (even if we couldn't have built it ourselves). Some of us have spent plenty of time researching or contemplating the nature of consciousness and the philosophy of mind, and see predictions/presumptions of human-like intelligence emerging from GPUs as at least a little silly. Plenty of us have come to know what it looks like when people are making grandiose claims – which they deeply believe to be true – particularly when great status and power seems within reach.
We can at-once happily recognise that contemporary LLMs are highly impressive and powerful, and the efforts of the researchers are brilliant and commendable, whilst also noting that these technologies have major pitfalls and limitations, and no obvious ways to resolve them.
The "blew past the Turing Test" claim is overblown, because we all know that an LLM-based product can seem human-like for much of the time, but then start generating crazy nonsense any moment. A human that behaves like that can cause millions of dollars in business losses, or planes to crash, and all kinds of other costs and harms. Humans workers are evaluated on their ability to perform at a high-level on a consistent and predictable basis. By that measure, LLMs are nowhere near good enough for critical applications yet (even if they may be better than many humans at certain things, much of the time).
The claims that LLMs will just keep improving at an accelerating rate until they don't make mistakes anymore are fair enough to make, but until we see solid evidence that it's happening and details of the technology breakthroughs that will make it happen, people are within their rights to reserve judgement.
From my perspective, the negativity stems from a general disregard of environmental impact, copyright or intellectual property, or education around hallucinations.
Yes, this is indeed a huge problem. All these models are trained on massive amounts of stolen data and the creators aren't receiving any of the benefit. That seems like a sheer disregard for private property rights, the one thing the govt should be in charge of.
Ok, well, I guess we're not going to get a proper retrospective for any of the OpenAI stuff for a while. That's too bad. In the spirit of the post I wish Sam had written, I'll say one thing I learned from watching the show: if you take advice, even from your own board, and what they suggested fails, they will still fire you even though it was their advice. So you might as well just always do what you think is right.
This applies to other leadership roles as well.
I used to look forward to his takes. Some of the past posts were genuinely insightful, but now all I hear is the cliched difficult road leading to an AGI whose consequences always seem utterly dire for anyone involved, perhaps except OpenAI.
I feel an ever growing mix of awe and horror when I read Sam Altman’s plans.
I still remember being so excited to receive my OpenAI private beta key sometime in 2020. After watching a few videos on developers talking to it, I was incredibly hyped to create something ambitious with it only to quickly become disappointed with its capabilities after trying to wrangle with a bunch of prompts.
So when ChatGPT came out, I thought it was a cool toy with a chat interface skin and nothing more. Before I knew it, AI (and its hype) had invaded a lot of unexpected corners of my life; and as more time passed, with more unexpected and perverse capabilities being discovered, I found it harder and harder to believe in all the utopian visions Sam and others preached.
Hopefully a great super-intelligent god will properly retire me and my family before all our skillsets are automated away.
>We are now confident we know how to build AGI as we have traditionally understood it
But we don't even have good definitions to work with. Does he mean AGI as in "sentience", AGI as in "superintelligence", AGI as in "can do everything textual (text in, text out) a 95th percentile human can do", or "can do everything a human on a computer can do" (closed-loop interaction with compiler, debugging, etc.).
He means "AGI is whatever it is will get me funding". AGI will be one thing to researchers, another to finance, another to your Grandma, and he will claim it to be here but also just around the corner.
It’s AGI as in Attract Gullible Investors. ;)
The AGI definition was never about superintelligence - that's ASI. Current LLMs are ANI - Artificial Narrow Intelligence. For me, AGI would be "can do what a 95th percentile human on a computer can do".
It may not even need to be that smart - if you take someone with an IQ of 80, we would still classify them as human-level intelligence, definitely smarter than most other animals, and such people are still useful to society and can provide value in many labour tasks.
What I think many people wrongly assume is that ChatGPT is just one AI - it is more like millions of instances of such an AI running at the same time, and you could probably scale to hundreds of millions of such dumb AIs. With humans, you would have to have hundreds of babies and babysit/train them for a minimum of 5-10 years before they are useful.
Allegedly, attaining AGI will get them out of the Microsoft deal (if I understood correctly on a recent Pivot episode).
Notice the lawyerly « AGI as we have traditionally understood it »
The specific definition they have is that they make $100 billion in profits.
https://www.theverge.com/2024/12/26/24329618/openai-microsof...
That's so orthogonal to AGI that it was a huge win for Altman to make it the criterion.
FWIW OpenAI themselves give a reasonably specific definition of AGI in their Charter [1]:
highly autonomous systems that outperform humans at most economically valuable work
But I guess the "as we have traditionally understood it" bit from Sam's phrasing may imply that in fact he means something other than OpenAI's own definition?
[1] https://openai.com/charter/
The circularity is an issue: as machines do work, it becomes less valuable.
Highly autonomous systems already outperform humans for the vast majority of the economically valuable work of the 1400s economy.
He did, though, mention that it is AGI as "they" have traditionally understood it. I think it's o3, the super expensive o1.
To be clear, Altman doesn't say they have achieved AGI, but that they "know how to build AGI". That's the difference between a product and a roadmap. Which are very different things, especially in cutting edge technology. Personally, I don't think they have the pieces to build anything more than an expensive agent that can act on hallucinations just as readily as accurate assessments.
> Does he mean AGI as in "sentience", AGI as in "superintelligence"
No. OpenAI and Microsoft changed their definitions of AGI to raise more money: [0]
AGI used to mean something years ago, but at this point it is a meaningless term, since the definition is different depending on who you ask.
It may mean "Super Intelligence" to AI researchers, "Raise more money to reach AGI" to investors, "Replace workers with AI Agents" to companies or "Universal Basic Income for all" to governments.
It could mean any of the above.
More accurately, it may also mean: "To raise more and more money to achieve "AGI" and replace all economically valuable work with AI agents (with no alternatives) whilst selling millions of shares to investors to enrich ourselves and changing from a non-profit to a for-profit for the benefit of humanity."
The last definition is what is happening and that looks like a scam.
[0] https://archive.ph/pmudc
Gary Marcus's take:
>We are now confident that we can spin bullshit at unprecedented levels, and get away with it. So we now aspire to aim beyond that, to hype in purest sense of that word. We love our products, but we are here for the glorious next rounds of funding. With infinite funding, we can control the universe.
That said I think the negative arguments are overdone and AGI and agents soonish are quite likely.
> We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.
Wow. Dude has really gone "full retard" on this one. They don't have AGI but they "know" how to build one. Quick, give me a couple trillion dollars.
This post really seems desperate as it tries to touch the inner FOMO of people. I wonder how much dumb money is still out there.
OpenAI has single-handedly revolutionized the AI field. They have turned the once thought impossible to achieve Turing test into an irrelevant side quest. Not to mention the many, many benchmarks that simply aren’t even hard enough any more to measure the quick progress AI is making.
Yet, somehow, Altman has gone ‘full retard’. Wow. Cut the guy some slack man. Next time you revolutionize an entire field and build a multi-billion company in the process you can come back to criticize him.
> Next time you revolutionize an entire field and build a multi-billion company in the process you can come back to criticize him
It doesn't really work like that. Criticism is valid regardless of whether someone else has a billion dollar company or not.
Now maybe, calling him "full retard" is a bit much.
But OpenAI does have a history of hyping their tech too much. Like remember the whole scare of "oh we are so afraid that we can't release this model, it is so dangerous" and then it gets released and it's a chatbot.
Remember they have a very specific definition of "AGI", which they want to meet quickly so they can charge Microsoft more money.
https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
> The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits
I need one more good year before going fully defensive in all my investments. I think it has that in it still. People still believe nonsense like he is spouting.
I’m worried to read this in case I get influenced.
I use LLMs every day, including the o1 model, and the hype doesn’t match the reality, which is pretty good but at most like a 15% increase in productivity. How are you meant to get AGI from that?
You won't get influenced about anything, it's basically empty self-backpats.
15% is pretty huge. It's like getting 6 hours extra each week.
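Spelled out (assuming a standard 40-hour week, which the comment doesn't state):

$$0.15 \times 40\ \text{hours/week} = 6\ \text{hours/week}$$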
Decision making, self-building APIs, self-testing software, etc. etc. etc.
I'm not even a big believer, but depending on how you define AGI, it's a range from not in a lifetime to might happen by 2030.
A 15% increase in productivity, and increasing, is extremely worth the hype; that's massive.
summary: we need marketing to prepare for the next seed funding, we burn a billion a month in losses. It's going to be a big bubble!
That all seems mostly reasonable.
Bit surprised about the AGI part and also the agent "join the workforce" comment. I thought it was their policy not to anthropomorphize?
You can't raise the big bucks unless you sell to investors' dreams.
> I also learned the importance of a board with diverse viewpoints and broad experience in managing a complex set of challenges.
Comic book level villainy. I like the guy!
>...when any company in an important industry is in the lead, lots of people attack it for all sorts of reasons, especially when they are trying to compete with it.
>...
>We believe in the importance of being world leaders on safety and alignment research
It's interesting to consider the above excerpt, in light of the below excerpt from OpenAI's charter:
>We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions.
https://openai.com/charter/
OpenAI doesn't claim to be leaders in safety and alignment research.
They "believe in the importance" of being leaders in safety and alignment research. For whatever that's worth.
But they do acknowledge themselves as the leading AI company.
Is it really in our interest as a species for the leading AI company to merely "believe in the importance" of leadership in safety and alignment?
Among the "all sorts of reasons" for people to attack the leading AI company, this strikes me as a fairly legitimate one. Just saying.
Also -- I notice that OpenAI seems to be criticized more than leading companies in other industries.
If maintaining your lead isn't in the interest of your stated mission statement... maybe you shouldn't actually be working to maintain that lead?
Did Sam or OpenAI ever publicly respond to Jan Leike's comments when he left? (Former head of alignment) https://threadreaderapp.com/thread/1791498174659715494.html
See also: https://openasteroidimpact.org/
A whole lot of words that didn't say much about specifics of the past or specifics of the future, just pablum and positive spins. It read as if he had an LLM help out (derogatory).
"We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity."
Claiming superintelligence in this post, in this form, given how rarely LLMs manage to be consistently and accurately truthful, is beyond wishful thinking, entering the magical, though through it all there is still the stink of Fraud.
> Claiming superintelligence in this post, in this form, given how rarely LLMs manage to be consistently and accurately truthful, is beyond wishful thinking, entering the magical, though through it all there is still the stink of Fraud.
Completely. I eagerly anticipate the lawsuits when OAI crumbles financially and stakeholders want restitution. He promises far beyond what any evidence shows they have developed.
Superintelligence without supermorality is superscary.
The Luddites have always been right. Technological change must be paired with economic justice.
I would guess that a gazillion dollars of other people's money is hoping that he doesn't put his foot in his mouth.
So there'd probably be staff/consultants who you'd want to review these pieces before they're published.
There might also be staff/consultants who have particular goals of the writing to begin with, and who might even write the entire thing themselves, or at least itemize talking points.
To be fair, I think Sam's belief is very genuine here. I remember way back in the day, when asked how they would make money, he said "we will just ask the agi". People laughed then, but that future seems closer now than when he said it.
Having said that, I don't think conscious AGI is possible on this path. It would probably be cracked by the Chinese, because the American way is brute force. The Chinese way is more yin yang with smaller investments (a forced hand, maybe), which will force research into more energy-conserving methods.
The key to conscious AI is energy conservation.
There you go. His blog post is literally talking about what will be discussed at the Annual WEF meeting:
From [0]:
> "While nearly 40% of global employment is exposed to AI, it is anticipated that most of this impact will be to augment work rather than to fully automate existing occupations."
The problem here is that once these AI agents, as Sam just said, "join the workforce", those jobs are eventually replaced and there will be no alternative for the lost jobs. It starts with customer service and will move up from there.
Both of them are now coining the start of this as "The Intelligence | Intelligent Age".
[0] https://www.weforum.org/meetings/world-economic-forum-annual...
“We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies.”
The most interesting argument is about the ratio of the marginal value of labor to the marginal value of everything else. Let's call it MVLE.
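One way to make that ratio concrete (my formalization, not the commenter's): with an aggregate production function $Y = F(L, K)$, where $L$ is labor and $K$ is everything else (land, capital, machines),

$$\mathrm{MVLE} = \frac{\partial Y / \partial L}{\partial Y / \partial K}.$$

Cheap AI labor then acts like a large increase in the supply of substitutes for $L$, pushing the numerator, and with it MVLE, down.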
In prehistory, land was plentiful and hunting/gathering skill was scarce, so MVLE was high. In the middle ages, the population had exploded and arable land had become scarce. MVLE dropped dramatically. In the industrial revolution, capital accumulation began, and MVLE began rising. Once labor costs were high enough, productivity became king, and services and information goods became more prominent. MVLE rose much further.
Note the seemingly accidental correlation between MVLE, self-determination and human rights.
Now we're at a crossroads. There's a possibility that MVLE might completely bottom out. One edge outcome is a world with 100 trillionaires, 1000000 concubines, a bunch of butlers and hair stylists, and no other humans.
The key question is: whose utility will be maximized? It's clear that democracy's cracks have turned into fractures. Half of the population has an IQ below 100 and can be gulled into voting themselves into oblivion.
> Half of the population has an IQ below 100 and can be gulled into voting themselves into oblivion.
While technically true, I’d caution against assuming IQ is at all representative of average practical or applied intelligence. There’s plenty of writing out there outlining the issues with IQ as a measurement.
I also think the problems we’re struggling with—insofar as the ethics of utilization of AI—boil down to humanity’s overall inability thus far to reach consensus on what we want to optimize our society for: the most good for the most people, the most good for some people, or something completely orthogonal to good or bad for any number of people (pursuit of knowledge, culture, unity, whatever).
Regardless, a lot of folks seem to have good and widely agreed upon ideas of what we do not want and I’d love to see more conversations around how we prevent the worst case scenarios given the current political and economic environment.
> I’d caution against assuming IQ is at all representative of average practical or applied intelligence
I don't necessarily disagree. Unfortunately, demagoguery has paid off handsomely in recent years. It has been refined as a discipline to the point that the "self-destruction gullibility threshold" might actually be much higher than 100.
Can they square "we know how to build AGI" with "a bunch of our executive team has left in the past 12 months", when those people clearly knew what he was talking about? Why would you leave if AGI was just around the corner?
(Which is to say, it isn’t. This has to be LLMs: the final frontier)
Strong words! And the other frontier labs won't be far behind them. The world is about to change massively.
I don't see why people are calling this "vague". It's not an announcement of GPT-5 or anything, but this seems pretty specific to me. It's good to have confirmation that OpenAI doesn't see any serious barriers to achieving AGI in the next year or two. Lots of people arguing otherwise these days and this is a direct refutation.
And, if AGI is not achievable by OpenAI in the next two years, then this is about as close as you can get to fraud while having a team of lawyers review every public statement.
Through that juxtaposition, Sam is likely going to claim that some implementation of AI agents will be AGI.
That's going to be very hard to argue.
His confidence is misplaced. He doesn't get to determine if AGI has been achieved. Right now, it's marketing speak.
What he's selling is brand Altman.
Or he knows more than he’s letting on.
Either way, the AI bubble is going to start popping if he can’t demonstrate meaningful progress - into the realm of making decisions and taking actions - within the next year.
Frankly, I think he’s exaggerating.
Passing difficult tests is no small feat... https://www.reddit.com/r/singularity/comments/1hj2tvj/its_ha...
Almost anything said from here on in will be an appeal to authority, but putting the logical fallacy issue to one side, that's a pretty lame chart. The y axis is highly contestable and essentially undefined.
Reddit is not your friend in the citation stakes.
Dismissive are we? Quibbling about a clear hockey stick growth curve.
These comments:
> Now do the same for other evaluations, remove the o family, nudge the time scale a bit, and watch the same curve pop out.
> This is called eval saturation, not tech singularity. ARC-2 is already in production btw.
A reply:
> You act like that isnt significant, people just hand wave "eval saturation"
> The fact that we keep having to make new benchmarks because ai keep beating the ones we have is extremely significant.
Agree with this. The pace has been mind boggling.
If you make the y axis "AGI points" you can make the graph look like whatever you want depending on how you define "AGI points".
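As a toy illustration (hypothetical numbers, nothing here is real data), the same underlying benchmark scores can be made to look roughly linear or like a hockey stick depending on how the points are defined:

    # Toy sketch with made-up accuracies: two arbitrary "AGI points" metrics
    # computed from the same scores produce very different-looking curves.
    scores = {"2020": 0.20, "2021": 0.35, "2022": 0.50, "2023": 0.70, "2024": 0.88}

    def agi_points_linear(acc):
        # Arbitrary definition 1: points proportional to benchmark accuracy.
        return 100 * acc

    def agi_points_saturating(acc):
        # Arbitrary definition 2: points blow up as accuracy approaches 1.0,
        # which makes the most recent models look like a vertical takeoff.
        return 1.0 / (1.0 - acc)

    for year, acc in scores.items():
        print(year, round(agi_points_linear(acc), 1), round(agi_points_saturating(acc), 2))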
The pace has been impressive, but until hallucinations are addressed the more faith and capital you put behind an AI agent, the more you risk losing.
Passing tests only by spending thousands in compute is good for research but not good for a recently-restructured-for-profit business.
Their definition of AGI in their agreement with Microsoft is a system that makes $100 billion in profits.
Note that they have an incentive to not get a system classified as AGI.
Hard to argue against, or in favor of?
They will certainly change the output, as we've already seen, likely not for the better unless something changes.
It should be "nothing personnel" in the sense of LLMs not being considered people. It seems more likely it will be "nothing personnel" in the sense of nothing personnel, kid.
Lord I hope the money runs out soon.
Let's play devil's advocate and say Sam is a conman, and that no definition of AGI or ASI will ever arrive.
What does he have to gain from all this, assuming the bubble bursts in the end? Fortune? He will get his salary over the course of the bubble, but I am not entirely sure how he can make money if the money runs out in the end.
Fame and Connections?
While I have very little technical idea about anything LLM, I am well versed in foundries, hardware manufacturing, and supply chains. Ever since he asked for a trillion dollars to build fabs and chips for AI, I have had a very cynical view of him.
> What does he have to gain from all this, assuming the bubble bursts in the end? Fortune? He will get his salary over the course of the bubble, but I am not entirely sure how he can make money if the money runs out in the end.
He can make a TON of money by converting the company into a for-profit with stock he can sell. He can do this very, very quickly before the bubble pops. This is the real reason he has been pushing for the for-profit change to happen soon: so he doesn't have to rely on salary and can rely on equity instead. As long as he gets out before the burst, he'll have his fortune.
Why did/do so many people still invest in/create crypto even when they know it's fraudulent?
Oh yes, thank you for the correction. I remember it was some insanely ridiculous number in the trillions. And then, instead of UBI, there was the talk of giving out Universal Basic Compute.
Presumably he’s at least partly conning himself.
Bubbles are defined by delusional speculation. The beliefs are the motivation; there's no reason.
He'll get the next funding round.
Naturally no comments on change in governance or profit structure in these reflections.
He comments on the founding of OpenAI as though OpenAI, the (currently) capped-profit company, and OpenAI, the non-profit which today controls it, are the same thing. They are not, and they are planned to be split, a split that cannot possibly be justified under the non-profit's charter.
As I watch OpenAI's structural development over time, it becomes increasingly clear that the wildly incompetent board of OpenAI had legitimate grounds for firing Sam.
> We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies.
This goes hand in hand with the World Economic Forum's (WEF) agenda of "Collaboration for the Intelligent Age" [0], which Sam is also attempting to coin under a similar title (The "Intelligence" / "Intelligent" Age). [1]
It will be no surprise that he will be invited to tell us all about how AGI will bring the utopia of Universal Basic Income (UBI) to everyone and save the world for the "benefit of humanity".
The truth is, "AGI" is a massive scam to raise more money to inevitably replace workers with AI and race all of it to zero, without any alternative for those lost jobs.
[0] https://www.weforum.org/meetings/world-economic-forum-annual...
[1] https://ia.samaltman.com/
Reminds me of the short story The Phools, by Stanisław Lem.
Sam Altman is just another pawn used by billionaires to make the billionaire class even richer, push deregulation propaganda, and push awful neoliberal economics.
Hopefully someday LLMs will be hugely beneficial to society for their ability to identify correlations in data. And, by their very nature, they are good with language and thus helpful in programming contexts.
But LLMs have no understanding of what they write. They do not ponder. Are not curious; do not wonder. Do not think or have thoughts. Do not create or invent. Do not have Aha! moments.
Maybe some day a machine will be capable of these things. Maybe not. But LLMs - by nature of their algorithmic design - never will.
On Nov 30, 2022, Sam Altman knew exactly how LLMs work. He knew the design of his LLMs was such that they were not - and would never be - sentient. They would never be capable of "understanding" in the manner that living things understand the world, or even the way human beings understand a sentence.
Yet, very soon after Nov 30, 2022, Sam Altman was making statements about how important it was for governments worldwide to address the danger ChatGPT posed before it was too late.
He was on the hype train before Nov 30, 2022.
The Nov 30, 2022 announcement was itself part of the hype train.
OpenAI, Google, Microsoft, Meta, Apple, IBM, etc. have spent - and continue to spend - billions on LLMs. And just like Altman, they know exactly how LLMs work. They know the design of their LLMs is such that they are not - and never will be - sentient. LLMs will never be capable of "understanding" in the manner that living things understand the world, or even the way human beings understand a sentence.
Yet they continue moving the hype wagon forward, faster and faster.
Someone is making lots of money. And soon so many more will lose so much more.
Still not on board with the whole scaling up LLMs -> AGI thesis.
> We are now confident we know how to build AGI as we have traditionally understood it.
Yes. It's just going to be another marketing term.
So Sam Altman is looking to keep the grift alive, gotcha
Is this like three inches wide on the screen so it looks longer? Sama padding the essay?
god, what a cunt
I stopped reading after “as we get closer to AGI.”
What is the message? I think it's that there are people behind this with human and relatable motives and foibles, that the consequences of this change are difficult to apprehend, that the main tool they have is to be incremental, and that it's just hitting its stride.
The comment about understanding what AGI means was compelling. I'd guess it may be something arrestingly simple, and they have the sense not to be meta about it, or to sound it out with freighted and inadequate words; they will just introduce it as it is.
Good luck.
[flagged]
Nailed it, one of the least inspiring personalities
I think the problem is exacerbated as he tries so hard to appear more sincere and interesting than he truly is.
It's unfortunate because Sam didn't use to be this way[0]. Once you reach such a high position, saying anything of substance appears to be too great a risk.
[0] https://m.youtube.com/watch?v=sYMqVwsewSg
Elon is a clear counterpoint to this.
And Buffett/Munger
And Gates
There was this guy called Samsa ...
( https://en.wikipedia.org/wiki/The_Metamorphosis )
The "reflections" read as if it was generated by chatgpt itself, smh.
AI company says AI is the future and to buy now.
Also a good interview in Bloomberg: https://www.bloomberg.com/features/2025-sam-altman-interview
Boy, we have lost trust in our boy big time. I feel the essay is earnest. He's not giving concrete details, but I'd like to give him the benefit of the doubt.
I think these are the least insightful comments I’ve seen on HN, maybe ever.
Hating from the sidelines is easy. AGI (or whatever semantics you prefer) is simultaneously: 1) one of the most positively influential technologies developed in human history, and 2) something that magnifies the failures of our current governance structures 1,000-fold.
Adaptation is/will be required, and it’s not going to be easy. But the finish line promises a significantly better future for (potentially) all. Sitting here arguing semantics, complaining about technological evolution (surface level insights, I suggest going a level deeper), or making weird anti-Altman statements instead of anything substantive is, well, not interesting.
> But the finish line promises a significantly better future
The entire reason that people are hating from the sidelines is because there's no clear finish line.
"Just wait for AGI" is as unsubstantial as the comments you are criticizing.
The proverbial finish line.
I don’t think anyone seriously views technological developments as having ultra clear cut start and finish lines.
Engineering/technological innovation is an iterative process and we have steps that provide value and capture value to then generate and provide more value to capture and so on and so forth.
I don’t think having a universal agreement of what AGI is matters, like at all.
> But the finish line promises a significantly better future for (potentially) all
Citation very much needed. Maybe better for the ultrarich owners, but for the billions out of work trying to figure out how they're going to get food and healthcare? No.
> Adaptation is/will be required, and it’s not going to be easy.
"Some of you may have to die, and that's a sacrifice I'm willing to make" [1]
[1]: https://www.youtube.com/watch?v=hiKuxfcSrEU
Sounds like you have an issue with our current governance structures, which allow too much value capture to flow to the ultra-wealthy untaxed, despite them relying on the society supported by those taxes to generate their massive wealth.
This will be true whether you're for or against AGI development. You're not solving the underlying problem, you're just kicking the can down the road for others to deal with.
AI technology is dramatically accelerating these trends, enabling new ones, and is itself a problem.
There is a common trope of claiming that technology itself isn't a problem, and it simply isn't true. Cheap and available technology absolutely changes the model. In the 1960s the Stasi could pay a bunch of people to monitor cameras and microphones, but it was incredibly expensive. Cheap cameras, cheap hard drives that store footage indefinitely, and cheap image/voice recognition all enable new, horrifying forms of control, surveillance, and punishment, even if the actual capability to watch people is not new.
I literally have no idea what we disagree on?
We're saying the exact same thing.
I'm just saying that cameras were always going to become cheap. So were hard drives, and image/voice recognition. This was always going to happen, and always will happen, because it is a net good overall. However, it also exposes new means of exploitation that did not exist before. AKA, technology solves two problems and creates a brand new one.
It is futile (and dare I say useless) to cry foul that technological innovation is happening. Instead, we should be revisiting our existing governance structures and remodeling them according to the new reality we live in.
You're arguing against things I didn't say at all. You should re-read my initial comment and try to not project <groupsay> onto it.