A Coming "AI" Correction/Winter?

anon_mibo said in #4003 6d ago:

With Facebook apparently making multiple $100M cash buyouts of individual AI researchers, billions and billions of investment dollars pouring into AI-related industry, and a general atmosphere of extreme hype, one starts to wonder where the matching profit is going to come from. Oh sure, if you spun up an actually fully capable PhD-equivalent remote-worker AI employee tomorrow, you might be able to charge thousands of dollars a month and sell subscriptions by the millions. But how close are we to that, actually? And what happens if it doesn't materialize as quickly as the industry is betting on, or the economics are no good?

My general impression of LLM-based AI has been somewhat disappointing. They are certainly useful for a lot of things, but not *that* useful. I pay $20/month. I wouldn't pay $200, and not because I don't have a lot of work to do. My friends who run more serious companies betting on this stuff have started quietly saying it's just not there yet, that the models can't maintain coherence and reasoning quality outside their trained domain. An investor tells me the best people are reporting about a 20% speedup. Very smart programmers are making a lot of noise challenging anyone to show genuinely impressive results seriously accelerated by LLMs, and it's unclear to me whether anyone has answered. There have been a few impressive stunts, but I've seen nothing that convinces me it's more than a fractional speedup for already existing talent and teams, let alone a full-on replacement. Y Combinator seems to be all in, but is it delivering?

Then we have the technical angle. Token prediction in principle could mean a full world simulation that understands everything and is fully intelligent. Or it could mean a glorified Markov chain. Which is closer to reality? The transformer architecture feels much more like a glorified Markov chain with some tricks to fit an interpolation curve instead of a lookup table. Impressive in its emergent capabilities, but also noticeably lacking. Can it be patched with reinforcement learning, "reasoning tokens", data augmentation? Somewhat. But just patched. The architecture is still effectively a sort of Markov chain. It captures some regularities but not all. It's just not the right *shape* for AGI. The inability to operate outside its interpolative trained domain seems like a permanent cap on the autonomy of LLM agents, relegating them to highly-managed junior roles, not trusty expert employee-agents.
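
To make the "lookup table" end of that spectrum concrete, here's a toy bigram Markov chain in Python (illustrative only, obviously not anyone's production model). A transformer effectively replaces this exact-match table with a smooth learned interpolation over contexts, which is where the emergent capabilities come from -- and where the cap shows up when you leave the trained domain:

import random
from collections import defaultdict

# Toy bigram "language model": next-token prediction by pure lookup
# over previously seen pairs. Zero generalization beyond observed data.
def train(tokens):
    table = defaultdict(list)
    for prev, nxt in zip(tokens, tokens[1:]):
        table[prev].append(nxt)
    return table

def sample(table, start, n=10):
    out = [start]
    for _ in range(n):
        # unseen context -> no sensible continuation; fall back to start
        out.append(random.choice(table.get(out[-1], [start])))
    return out

table = train("the cat sat on the mat and the cat ran".split())
print(" ".join(sample(table, "the")))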

Meanwhile the revenue growth of Anthropic and OpenAI is pretty impressive. The main show has been code generation, menial data munging, entertainment, and spam spam spam. Someone must actually be figuring out how to use these things, but I hear the unit economics are still negative. The companies are betting on durable monopoly positions and the ability to back off the capex and increase margins. But token interpolation looks like a commodity service with low switching costs, constant threat of disruption, and low margins. The most durable business here is the low-IQ entertainment companion that develops a codependent relationship with the user. LLM psychosis and Grok's robo-waifu are early indicators here. But if *that's* the business case and not transformative AGI, a lot of the investment thesis collapses.
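
The low switching costs are easy to see in practice: most vendors now expose OpenAI-compatible endpoints, so moving a workload is often one line of config. A sketch (the endpoint and model names below are made up for illustration):

from openai import OpenAI

# Switching "token interpolation" vendors is often just a different
# base URL and key. Hypothetical endpoint/model names for illustration.
client = OpenAI(base_url="https://api.some-rival-lab.example/v1",
                api_key="YOUR_KEY")
resp = client.chat.completions.create(
    model="rival-model-v2",
    messages=[{"role": "user", "content": "Summarize this contract."}],
)
print(resp.choices[0].message.content)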

There's a lot of value here, don't get me wrong, but the economics are unclear and I can't shake the feeling that the investor hype has been built on a speculative sci-fi narrative the technology isn't ready to live up to. It's reminiscent of previous tech bubble crashes. The markets are insane and often run counter to fundamentals, and most of the big labs just got fat $200M contracts from the military, which will help smooth things out, but even the fact that they sought those out is suggestive of the underlying situation.

What do you guys think? Is it time to short AI, or is this just a slight moment of uncertainty on the straight road to singularity?

referenced by: >>4014 >>4077

anon_qypy said in #4005 6d ago:

My guess is that the current state sort of rhymes with the position of the internet in the late 90s. The underlying tech is reaching the top of its S-curve, but it hasn't yet been integrated into applications and daily life, and most of the work of unlocking its potential will be done over a decade or two. Right now there's a lot of wild hype and a lot of crazy overvalued companies. At some point the bubble will pop, at least in a financial sense, and a lot of companies with bad economic foundations will go bankrupt. But the underlying tech is still there and has a lot of potential uses. Some people are going to survive the bubble, some people are going to start new projects, and the actual rate of cool and powerful things being built with LLMs isn't gonna change that much despite the financial crash.

referenced by: >>4008

anon_guwu said in #4007 6d ago:

One business case for companies like Meta is that while LLMs might not produce radical technological breakthroughs, they might be globally necessary in things like search engines and social media / other entertainment apps. Google needs Gemini just to stay relevant in search. I know I'm already using LLMs for most of what I used to do with Google.

I don't know what Meta sees with Facebook and Instagram, but I wouldn't be surprised if Zuck thought that he'd end up reliant on a competitor for an essential service if he didn't bring it in-house. So it could be more of a defensive move.

After getting used to AI tools for coding (Cursor, Claude Code, and o3 are my normal stack) it certainly /feels/ like it would be impossible to go back. But, the faster and more cleanly I need to do something the more likely I am to go in and write it myself. A lot of the appeal is ease -- most (but not all) things Claude Code can do in 10 minutes I can do in five, but it's less taxing (and gives me time to do non-coding things like stay on top of comms) to just leave it to Claude.

At this point it could certainly still be cope, but I've only become more confident that it actually makes a difference over the past 4-odd months. I also expect it to get better over time even if the base models don't improve because the tooling matters a lot.

On the whole I expect most developers will end up using generative LLM tools ~forever, or until the profession of programming gets disrupted more thoroughly by something even stranger. I expect the same thing will happen in professions like marketing and accounting*. This can be true without significant gains in productivity, and potentially involves huge lock-in and ecosystem effects.

* see Google adding AI to sheets https://workspace.google.com/resources/spreadsheet-ai/

anon_vifo said in #4008 6d ago:

>>4005

Agreed with this poster. It's almost as if the concentration of capital in a few companies that've been around since before the LLM boom has actually *frozen* imaginative and creative use cases from getting developed, crystallizing us in the 2010s. They're penetrated by EA tards, they train their models on Reddit, and they embrace sycophantic AI personalities, presumably out of a desire to placate the lowest-common-denominator user. There is low-hanging fruit all over the place that could be picked to deliver a higher-quality product, and yet all these billions seem funneled into gaming the benchmark tests.

It will take a burst bubble and a flow of capital into companies whose incentives are not aligned with building goonbots to see real progress. All the money sloshing around right now seems like it's circling the drain.

For what it's worth, in the meantime I improved my LLM experience drastically by giving it the following commands: do not compliment me, do not speak to me in a familiar manner, write your answers in short paragraphs rather than bullet points and lists, and please write in a style similar to Pliny the Elder -- but do not overdo it with metaphors and comparisons.
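
If you'd rather not paste that every session, the same instructions work as a standing system prompt. A minimal sketch using the OpenAI Python client (any chat-style API would do; the model name is just an example):

from openai import OpenAI

STYLE = ("Do not compliment me. Do not speak to me in a familiar "
         "manner. Write your answers in short paragraphs rather than "
         "bullet points and lists. Write in a style similar to Pliny "
         "the Elder, but do not overdo it with metaphors and comparisons.")

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[{"role": "system", "content": STYLE},
              {"role": "user", "content": "What caused the dot-com bust?"}],
)
print(resp.choices[0].message.content)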

referenced by: >>4010

anon_bata said in #4010 6d ago:

>>4008

are you an AI bot? the statements you make are true but say nothing

one thing I think LLMs can't do well is early-2000s 4chan internet texting, but maybe someone will train/fix it

The winter is text itself as an online communication medium. I almost never seriously consider or read what other people send as text anymore, because they could've Ctrl+V'd it, keybind-expanded it, used Grammarly, or changed the text tone

idgaf if someone seems rude, the alternative is wasting your time typing to nobody

saw this on hn a while ago but LLMs are a DDoS of written text everywhere on the internet

referenced by: >>4017 >>4018

anon_lame said in #4014 6d ago:

>>4003

> Token prediction in principle could mean a full world simulation that understands everything and is fully intelligent. Or it could mean a glorified markov chain. Which is closer to reality?

We're at a point where AI solves freshly-written IMO and IOI problems. It's pretty clear which of those two options is closer to the truth.

Most individual AI companies are still overhyped and ephemeral, of course. But AI as a technological phenomenon seems about appropriately hyped.

In this way, the "internet in the late 90s" is an apt analogy. Most "dotcoms" from that era are long dead, but two of them were Amazon.com and Google.com, and the internet as a whole has terraformed the economy and people's daily lives more thoroughly than almost anyone imagined even at the height of the bubble.

What does this mean for us? What can we learn from history?

1. Timelines will be long (~decade scale, not ~year scale)

"AI 2027" and its ilk are wrong in that they are self-serving wishes. No, the singularity is not going to happen imminently such that the fate of the world hangs on the wisdom of a few Berkeley alignment researchooors in the coming months. Just as the late-90s wild "Dow 40,000 by Y2K" predictions were wrong because those, too, were self-serving instant-gratification wishes. The world has inertia and does not rewrite itself overnight. But it can and will be be rewritten in time. The Dow is at 44,000 today.

2. Outcomes will be large

We do, in fact, have machines that can think. The median outcome of that invention is another world-rewrite on the scale of the internet. The right tail potential outcome is more extreme than that. Either way, it will take a decade+ to play out. If history teaches us anything, it will not be a simple quick up-only endgame controlled by a few actors you could name today, but rather a chaotic evolutionary process of Cambrian explosion and culling with winners that mostly do not yet exist.

referenced by: >>4052

anon_teqi said in #4017 6d ago:

>>4010

Everyone thought it was gonna be deepfakes, but no, it's writing and text itself we can't trust anymore. Your LinkedIn connection request or your Hinge match could be a bot. A few will have LLM psychosis, but everyone else's gonna go outside and touch grass now cause there's no point reading online anymore.

anon_vifo said in #4018 6d ago:

>>4010

If you're reading text by someone who uses Grammarly both of you are retarded.

db said in #4052 5d ago:

Everyone in this thread is mostly on the money in my opinion, and most especially
>>4014

That said, I feel I have to bring up the "worse is better" explanation: namely, that something being simple and cheap lets it attract mindshare such that its success in the market becomes almost a self-fulfilling prophecy against higher-quality, more efficient, but more expensive competition.

In other words, even without providing a 20% productivity boost, LLMs might have enough going for them to become a dominant computing paradigm.

See https://dreamsongs.com/WorseIsBetter.html
