When you read the title, "Why everything feels broken and what can we do", it is obvious the OP turned a "me" problem into an "us" problem.
It's a common pitfall to take yourself out of the equation when you are about to make assertions without evidence to back them up. The title could have been written as:
"Why do I feel everything is broken, and what can I do about it".
You will end up writing a completely different post as a result: a more sincere one, and one that is closer to the reality of the situation that needs to be addressed, namely one's own perceptions, biases, and feelings that come from personal experience.
> The title could have been written as: "Why do I feel everything is broken, and what can I do about it".
Sometimes collective action is required to fix an issue. Restricting people to individual action is depriving them of their freedom of association, a core enabler of democracy.
Both of your comments have a grain of truth; neither just applies to this scenario.
what halayli wrote is spot on in this context. If you're writing a blog post and calling it a day, then you're not trying to change the status quo. If he's giving talks, seeking out interviews, etc. to address this issue, you'd have a basis to argue your points...
But like this? No, it doesn't hold up.
It's super normalized however, and not specific to this blog.
Most blogs seem to call on the reader to do things. And even if every reader did, you still wouldn't have changed the status quo, because that needs a way bigger investment.
Seems like you've fixated on a detail about the title and ignored the substance of the post. Whether or not everyone agrees that "everything feels broken" doesn't really matter.
No, I think the original post has the right idea. Attempting to individualise it to the extent you suggest would dilute the original message and make the article slightly pointless.
Well yes, but you then deny people a collective position. Please consider that we live in a world where strong corporate actors massively profit from us not having a collective position. Shifting responsibility to the individual is one of their strongest tools in the toolbox to ensure problems do not get solved.
I think it is important from time to time to exit the world of individual responsibility ("What can I do to reach the moon?") and enter the world of collective organization ("What can we do to reach the moon?"). You probably understand why.
When the central thesis is that the brokenness comes from systemic issues there is very little a lone individual can do. That's the fundamental issue when it comes to systemic issues. The alternative article you suggest would therefore not be interesting.
It is strange finding this comment at the top given its fundamental misunderstanding of TFA.
Well certainly, some of us are all in on psychopathic material accumulation and so of course in that case the only things that are broken are those that stand in your way, such as morality or society.
Lots of bold assertions without evidence. Author claims Google's high salaries killed entrepreneurship. Is there any data to back this claim?
Or the idea that democracy can't adapt to social media discourse; not everyone is chronically online. Politicians still respond to public sentiment to a similar degree as they always have.
Then there's this:
> AI systems aren't just tools—they're deployed faster than we can develop frameworks for understanding their social implications.
If they aren't just tools, what are they? Why do we need a framework for understanding their social implications?
Post feels like a fever dream of someone who fell asleep to the Navalmanack audiobook.
Your broader point may stand, but that's not a counterargument if you give the original claim the benefit of the doubt: it can reasonably be interpreted as referencing lifestyle-oriented businesses, niche B2B companies, anything small-growth with a low ceiling.
Counting unicorns only serves to bolster that point: those are large VC-fueled ships which operate on a completely different level. Because it is clear at founding time which type of company they are, it's reasonable to include that as a qualifier.
Now if the data showed more small-growth companies were started, that'd be a stronger counterargument.
What do businesses need? Startup capital. What’s a good way to obtain it? Having a high salary and saving.
The counter argument is that high salaries enable more people to save and take risk rather than just those that start out wealthy.
The main counter, that people get comfortable and don't take risks, is definitely true, but I'm not sure how much that impacts the amount of entrepreneurship that would otherwise have happened.
This makes sense so long as you define “entrepreneurship” as the act of making a legal entity with the goal of using venture capital to hit a billion dollar valuation as quickly as possible.
In fact, since the number of billion-dollar valuations goes up by an order of magnitude every few years, we are on track for every person on Earth to start a billion-dollar corporation in a few short decades. This is proof, which you could see on Wikipedia, that entrepreneurship is doing great.
Given the frequent "it's not X—it's Y" type of constructions, lack of researched data, and the em-dashes, unfortunately I think this is the fever dream of a GPU cluster humming away somewhere.
The only thing worse than the author claiming that is you asking for "data". Like what data? This is qualitative. And yes, Google killed one particular flavor of startup that paid its employees with smoke
It’s always kind of funny when people respond to a subjective opinion with “where’s the data?” because unless you’re responding with data it’s pretty much just a way of saying “I don’t like this”
The blog post makes a bold claim about how the rise of tech salaries at Google killed entrepreneurship. The poster could have at least given this a sniff test: how about, for example, showing a graph of Google salaries overlaid with a graph of the number of start-ups from the Crunchbase database? As he didn't even put in this minimum effort, it's valid to ask "where's the data" here.
This is a good point. When somebody says something “killed” an intangible thing, they are obviously speaking literally. Honestly we don’t even need a graph, surely if he’s alleging a murder he would be able to produce the weapon, like a knife or a gun
Now obviously someone could mistakenly interpret that sentiment as “incredibly high salaries attracted talent of such caliber that they could have started their own companies but did not” but that simply can’t be possible because there is no graph that shows all of the businesses that didn’t happen, therefore it is literal.
It’s not like anybody on here has worked with somebody that could easily have started their own company but chose SWE at a FAANG. It’s just not a thing
> And yes, Google killed one particular flavor of startup that paid its employees with smoke
To me, it reads like a desperate far-fetched argument to deny employees a fair compensation for their work. As if there is any virtue in stiffing people out of their paycheck.
With a lot of fancy wording, the article basically proposes that slow-moving, bureaucratic educational institutions should catch up with TikTok’s latest algorithm, helping raise the next generation of influencers.
Yes Google’s high salaries are part of a system that has been reworking entrepreneurship in Silicon Valley. This has been documented and discussed at length. Did you look for data?
Big tech pays in valuable stock, and salaries can reach upwards of 500k for relative rank-and-file positions (not rare one-offs). Over a decade, that's $5M. At the same time, VC firms have been holding companies private longer and raising more rounds, which often dilutes the employee shares and reduces the "reward" for employees waiting for an IPO. If that new diluted IPO rewards an employee under $5M for a decade of employment, they were better off at Google/Meta/etc. Startups were always a lottery ticket, but if a "winning" ticket is less profitable than not playing, why join at all?
This plays directly into the thesis that the powerful are extracting additional resources at the expense of cultural expectations and understandings. VC firms diluting employees is profitable for VCs, but it jeopardizes the Silicon Valley startup ecosystem if smart people prefer better compensation. Same with the recent AI acqui-hire controversies like Windsurf. Why join a startup if the CEO will take a billion-dollar payout and leave the employees with worthless stock?
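The comparison above is easy to sketch numerically. A minimal back-of-the-envelope script, where the startup salary, dilution rate, round count, and exit value are all illustrative assumptions rather than data from any real company:

```python
# Back-of-the-envelope: a decade at big tech vs. a decade at a startup
# whose employee equity is diluted by later rounds. All numbers assumed.
years = 10

# Big tech: ~$500k/year total comp in salary plus liquid stock.
big_tech_total = 500_000 * years  # $5,000,000 over the decade

# Startup: below-market salary plus equity whose paper value at exit
# is cut by each later funding round raised while staying private.
startup_salary = 180_000          # assumed cash comp per year
exit_equity_value = 10_000_000    # assumed undiluted paper value at exit
dilution_per_round = 0.75         # each round leaves employees ~75%
later_rounds = 4                  # rounds raised before the IPO

equity_after_dilution = exit_equity_value * dilution_per_round ** later_rounds
startup_total = startup_salary * years + equity_after_dilution

# With these assumptions, even a "winning" ticket pays less than
# simply staying employed at big tech.
print(f"big tech: ${big_tech_total:,.0f}")
print(f"startup:  ${startup_total:,.0f}")
```

Tweak the dilution rate or round count and the diluted "win" quickly drops below the guaranteed salary track, which is the point about lottery tickets that pay less than not playing.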
I wrote a lot more below, but I think you've picked up on two important things here:
1. This post seems likely to have been written with the assistance of a chatbot, which explains this phrase. The "they aren't just tools" phrase is only there because he needed a third example for a "it's not just X--it's Y" paragraph, one of ChatGPT's all-time fave constructions.
2. Another cause is more fundamental: the third "leverage" doesn't really apply to this discussion IMO. It's probably useful elsewhere, but owning a large social media company is just a different variety of capital, not some mutually-exclusive group. All expressions of capitalist power in 2025 have some amount of technological amplification going on.
>Why do we need a framework for understanding their social implications?
I would guess that the argument for wanting a framework for understanding language models’ social implications would be the social implications of language models. Like a phenomenon existing is a valid argument for understanding it.
Do you have data to support this? What does “chronically” mean and why would its absence invalidate the idea of social media impacting how people act and vote?
This is a blog post, not a research paper. The evidence is anecdotal, and most people with common sense know by intuition the facts that the author mentions.
What a strange article. It feels like one of those cases where the author gathers so many of the pieces, then, just... fails to solve the puzzle. Or get anywhere close.
I agree that everything feels broken. I'd like to do something about it. Let me deploy my leverage to work on that. Here's my "labor leverage", right here, this comment. Check. Leverage strength... not much. Let's bring my "capital leverage" to bear... okay, done, my 401(k) is invested in my favorite companies. Did you notice? No? Okay, leverage strength... let's go with epsilon^2. And my "code leverage"... uh... I don't think I have any.
So, wait, I, personally, don't have any third-order leverage at all? How am I supposed to go up against trillion-dollar, billion-node networks, with my epsilon^3 "leverage"?
That's the real problem: I don't actually have any meaningful leverage. I'm not in the game.
Is that actually true? Doesn't matter: I believe it to be true. Sure looks like everything is broken to me.
This is about hyper-scaling unicorns who can create and scale a business faster than the regulatory framework[government] can respond. "Disruption" is a dog whistle for "we're going to break the law faster than the government can keep up." The game is scaling from disruption to entrenchment before the first "The United States vs You" hits your desk.
If you don't have the ability to play the game then you can believe it doesn't exist, but those playing it found a slot machine that only lands on 777.
The article does not really fit that piece into the framework of "why do people in society feel that things are broken?" (which your comment seems to do just fine). Hence my complaint about the article.
Unfortunately sibling comments have exposed this article as likely to be AI slop, which would explain pretty much everything about it. Now I am reassessing my own ability to be taken in by slop, and that's depressing in different ways....
Engineering/STEM training doesn't have what is required to fix such problems, so engineers have to get involved with multi-disciplinary groups or nothing useful will ever get built that actually fixes things.
This is why a lot of people within tech/science circles feel lost and defensive about their work. They barely understand anything about the humanities/social sciences.
Consciousness of what is missing is increasing slowly, thanks to the info tsunami the internet has unleashed.
But that info delivery architecture relies on pseudo-experts and celebs whose survival depends on collecting views, and the information arrives in such random order, with high levels of overstimulation and noise, that it creates even more confusion.
What's missing?
No foundations in Philosophy. No idea where Value Systems come from. No idea how they are maintained, learn, and adapt to change. No idea why all religious systems train their priests in some form of "Pastoral Care" involving constant contact with ordinary people and their suffering.
So the Vatican survives the fall of nations/empires/plagues/economic downturns/reformation/enlightenment/pedo scandals etc but science/engineering orgs look totally helpless reacting to systemic shocks and go running to Legal/HR/PR people for help.
That's at the org level. At the individual level, most tech folk pretend the limitations/divisions of their own brain/mind don't exist and have no impact on what they build. There is no awareness of what Plato/Hume/Freud/Kahneman have to say about it, and how those divisions of the non-united mind, and the denial of them, affect what gets built. And since the article mentions systems running at different speeds, think about the electrical and chemical signaling in your own mind. Are they happening at the same speed?
So don't try to work all this out by yourself. Multi-disciplinary groups are our only hope. If the org is filled with only engineers, history already shows us how the story unfolds.
Bureaucracy and institutions are by definition slower. It has always been like that.
If they want to have more impact, they need to adapt to more market-like techniques.
Whether an institution moving faster would be good or bad, I am not sure actually. Probably moving too fast for something that belongs to "everyone", with all kinds of heated opinions, etc. is not the best place to move fast.
OTOH, and this is as a Spaniard (I do not know enough about the specifics of America), I feel that nowadays parts of the institutions are "injected" with changes that society is not demanding from them. Destroying the established base and traditions of (in this case) society. Social engineering and influencing campaigns, I would say.
Maybe this is not the main topic of the article. Just was a brain dump I was doing about observations of my own.
These things have always existed, actually. It is just that with technology, many people using it, information from individuals, etc., this influence is probably exercised more effectively.
> Something fundamental has shifted in how power works, and most of our institutions haven't noticed. We're living through what might be called "leverage arbitrage divergence"—a growing gap between how fast some actors can change the world and how fast others can respond to those changes.
I disagree that the way «Power» works has fundamentally shifted. This is a classic pattern of hegemony/insurgence/counter-insurgence.
Excellent observation. But I believe your proposed solutions could be strengthened. Improving “leverage literacy” may be a first step, but the fundamental mismatch in velocity will continue expanding the gap between technology and social institutions.
The question then becomes, how do we increase the velocity of social institutions to keep pace with technology? Balaji’s blockchain native societies come to mind. The comment in the thread about needing philosophical roots in engineering is interesting too. Curious what you think.
On the end paragraph, it became very obvious that a lot of this was AI generated. Please, speak in your own voice! The message of this article is almost completely ruined by the use of ChatGPT as a writing crutch.
I know I shouldn't care at all. However I can't help but feel a little bit sad knowing that a (I suppose) once genuine blogger simply gave up and decided to delegate his thinking process to a token machine.
I hate AI writing, but I get it on pointless B2B marketing & awful linkedin posts, they're worth nothing, and it's a great way to block and ignore an account.
But the fact that blog trying to communicate an interesting idea is using a predictive machine to word things is just so depressing. Is it your blog, or is it chatgpts? Did you come up with the term leverage arbitrage, or did a predictive model?
Did OP just provide an outline and let the predictive model write the entire thing? It just feels sad.
Personally I do care about punctuation, it's part of the (my) style and presentation, the way to convey meaning, etc. Still a reference to a classic:
https://news.ycombinator.com/item?id=7865024
Oh I do care about punctuation (at least when I'm writing long-form stuff). I mean I shouldn't have cared enough to look at the past articles or comment here. The proper response to AI submissions is ignore and optionally flag.
Agility is the one trait by which a new entrant can unseat incumbents.
Lose that, and you'll be stuck in a stagnated first past the post world.
Does that mean all new is good and old is bad? No. And 'hypernovelty' has huge problems, as it leaves no time for individuals or society to adapt. But tread carefully with what you wish for.
> making employment more attractive than company-building
This is decidedly untrue. You can make a few clever(er) points here: that VC meddling is rotting the core of ingenuity (which I happen to agree with), or even that large companies (GOOG/MSFT/etc.) are tangentially capturing startups via incentives (think free credits, etc.). But the author doesn't make these, so I won't argue against (or for) them.
> Today, higher-leverage actors are strip-mining institutional commons—democratic norms, social trust, educational relevance, economic mobility—faster than lower-leverage actors can regenerate them.
This will seem like a technicality, but for a pedant like myself, it's quite important: this is absolutely not the tragedy of the commons. It might be a new tragedy (maybe we'll call it "theft?"), but the interesting and paradoxical nature of the original has nothing to do with this reformulation. What makes the tragedy of the commons interesting is that every agent rationally maximizes its own local payoff even though doing so drags down the global one. A sort of "missing the forest for the trees" thing.
> The stakes couldn't be higher. If we don't develop conscious approaches to managing leverage arbitrage, we risk a future where technological capability advances exponentially while social coordination capacity deteriorates linearly—a recipe for civilizational breakdown.
This seems a tad dramatic; "leverage arbitrage" was significantly higher during colonial or industrial times and society didn't exactly collapse. I agree with the sentiment, but not sure about all the "oh my God."
> helping people recognize the type of power they're actually wielding and use it more consciously
lol, lmao even. I don't even think this would work if there wasn't capitalism in play.
> Most importantly, we need to redesign how we measure success and allocate resources.
Oh, they *are* suggesting we dismantle capitalism.
Even if we were to dismantle capitalism in America, TikTok is a Chinese business/product. So this has to be a global solution. While we're wielding our lever that can move the globe let's fix global warming, war, hunger, etc while we're at it. No point in half measures.
Remember that for the vast majority of human history, most people have not felt like they were truly in control of their lives. If it was not God, it was the king. Remember that nearly every animal except humans dies by being eaten. We and our pets are nearly the only creatures that die in their sleep. To suggest that the world is supposed to "work" in a way that leaves us feeling un-anxious is a modern sense of entitlement. For most animals, and most humans, the only thing you were entitled to was to scrape by desperately until you finally, eventually, failed and died.
Our institutions were more stable before because people were more dedicated to maintaining them. They were created because overt pain or death was the alternative. But then they began to protect us from discomfort. Then from the fear of discomfort. And then from discomfort that exists only in the mind. This was bound to come apart around the time some people, when asked to be afraid, simply said "meh".
Will this go badly? Certainly, from the perspective of whether people feel safe and content. But we have had 10,000 years of self-generated strife from humans. It is our fuel.
While I believe the reasoning is utterly littered with weak arguments, I agree with the final conclusion/question of the author.
But what the author isn't clearly saying is that he is describing a free market as opposed to a planned economy; even if he doesn't specifically say so, that's the implication of the final question.
And I am all for a planned economy. I want to cooperate with my fellow humans, not compete to crush them. In this day and age it is absurd not to do this. The technology is there; we just need to use it the right way. I know many will have a knee-jerk reaction to this statement and start spitting out the old anti-Soviet arguments about planned economies. I'm bracing.
If you want competition then accept the social outcomes. Wanting to equalize competition through regulation is exactly like banning war crimes. When war is raging, all kinds of war crimes will be committed.
I'll disagree. But fundamentally the problem of a centrally planned economy is whether or not you agree with the plan.
I worked in the defense industry, specifically on weapons meant for export. At the time, I was considered a subject matter expert. Something that eventually got me selected to travel to a foreign nation that had purchased an update to that weapon we sold them.
This country had an active internal conflict at the time, and the soldiers I was training had used the previous iteration of that weapons system. Prior to flying out, my peers had forewarned me that these young men loved to try to horrify us soft Westerners. I was no exception, of course. For me, it came when we were relaxing over dinner and one of the soldiers felt it was my turn. He showed me photographs of a family of corpses: 5 adults, 4 children, the youngest I guess being an infant and the oldest probably no older than 8 or 9 years old. A multi-generational household, all civilian, all unarmed. And here he was, telling me in unabashed gruesome detail how he had murdered them during a patrol, speaking as if he had brought justice to an ancient crime. Using that older iteration of the weapons I had helped create.
But I had my marching orders: no actions or words that could potentially lead to an incident. Whatever I felt got buried under black humor and a few tips on how they could do it more efficiently in the future. Later on I heard my callousness had earned me a modicum of respect.
Knowing what I just told you now, would you feel comfortable in your planned economy if you were ordered to join my team and work on improving that same weapon to the best of your ability? Would you say that it is the right way to use the technology we are building? Knowing who it was going to go to?
We have a planned economy, just one planned by capitalists.
It doesn't seem to matter whether an individual works for the weapons manufacturers; they're going to build them with my tax dollars and sell them to monsters whether I like it or not.
We already have a planned economy in a lot of ways anyway. We just don't have much collective control over who governs those levers, particularly when it comes to resource allocation. VC might look like a free-market approach on the surface, but it gives huge power to shape the economy to unaccountable, unelected randos who likely "come from money" with a lot of skeletons in their closets. Even if they don't, having that kind of power with essentially no checks is a recipe for disaster.
We don't have to copy the Soviets exactly, but it's foolish to pretend their system didn't work. Sure, they killed and imprisoned people, but we do the same on at least the same scale. Socialism, at least in principle, isn't specifically aiming for that outcome. Whatever we're doing here kind of does. Our country would crumble if the war machine ever stopped. Changing our approach to anything is just too expensive to consider, and dissent can't be tolerated for the same reason. What options do we really have?
An economy is only free if left uncontrolled. IMO emergence of "cartels" and colluding entities (including politicians), have destroyed the notion of truly free markets.
Interesting and well written, thanks for posting! That said, I think there's ample reason to file this under "writer steeped in SV-coded libertarianism unintentionally rediscovers the arguments in favor of socialism."
> The framework begins with a simple observation...: there are three types of leverage.
I get where he's coming from, but "leverage" (AKA competitive advantage in a market?) seems like a woefully inadequate framework with which to understand the overall direction of society. Regardless: What's a "systematic" growth rate...? I'm obviously a math noob, but I've never heard of that, and don't see it mentioned on wikipedia: https://en.wikipedia.org/wiki/Growth_rate_%28group_theory%29
In general this seems like a good summary of the most Marxist dichotomy of them all, labor vs. capital, with "code" tacked on to the end even though it doesn't really fit. I've cut a lot of responses to the examples below, but I think it's easy to pick out the ones that don't fit once you start looking -- I leave it as an exercise for the reader ;)
> Today, higher-leverage actors are strip-mining institutional commons—democratic norms, social trust, educational relevance, economic mobility—faster than lower-leverage actors can regenerate them.
Another way to say this would be[1]: Constant revolutionising of production, uninterrupted disturbance of all social conditions, everlasting uncertainty and agitation distinguish the bourgeois epoch from all earlier ones. All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air, all that is holy is profaned...
> The leverage arbitrage creates a vicious cycle. As the gap between different leverage types widens, traditional coordination mechanisms become increasingly ineffective. This drives more actors toward higher-leverage approaches, accelerating the divergence. Meanwhile, the institutional commons that make civilization possible—shared truth, democratic discourse, economic mobility, social cohesion—continue degrading because they depend on lower-leverage maintenance that can't compete with higher-leverage extraction.
This seems to be the main thesis, which I'd rephrase as: "Accumulation of wealth at one pole is, therefore, at the same time accumulation of misery, agony of toil, slavery, ignorance, brutality, mental degradation, at the opposite pole, i.e., on the side of the class that produces its own product in the form of capital."[2]
> But understanding this dynamic also suggests solutions. Instead of trying to slow down technological change or somehow make institutions faster, we need "leverage literacy"—helping people recognize the type of power they're actually wielding and use it more consciously.
And this is why I felt compelled to write a long response in the first place. The solution to "the capitalists are controlling everything and I don't like the results" is to band together as workers and exert control over the situation! The stereotype of libertarians as the kind of people to offer the destitute "financial literacy" classes instead of help is a trite one, but clearly not obsolete...
> Most importantly, we need to redesign how we measure success and allocate resources. Current systems reward short-term optimization within single leverage types while ignoring long-term effects across leverage types. The result is systematic underinvestment in the institutional commons that higher-leverage systems depend on to function sustainably.
Yes, I agree -- we need to greatly reduce/eliminate the power that shareholders have to prioritize YoY equity growth over the needs of society! Another way to say this would be "As capitalist, he is only capital personified. His soul is the soul of capital. But capital has one single life impulse, the tendency to create value and surplus-value, to make its constant factor, the means of production, absorb the greatest possible amount of surplus-labour. Capital is dead labour, that, vampire-like, only lives by sucking living labour, and lives the more, the more labour it sucks."[3]
Or, in the words of noted anti-Capitalist Sam Altman: "We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project."[4]
> The choice isn't between technology and tradition—it's between conscious coordination across leverage levels and unconscious optimization within leverage silos.
This really sums up our disagreement. He thinks the solution to the problems he details is to coordinate more closely with the capitalists -- I feel confident that it's to disempower them. To put it in libertarian terms: their incentives are not a good fit for this situation!
Thank you for the truly thoughtful comment. I’m not OP, and I have the polar opposite political/economic opinion. But I come to HN for interesting, intelligent perspectives, and you have provided that. Thank you comrade.
Or maybe it just feels that way because the world is being fixed at unprecedented rates?
For example I had the misfortune to needing help from the mental health community a decade ago. I sometimes think about how much easier that part of my life would have been if TikTok and Claude had existed back then. It’s easy to forget, but many of the institutions we’ve built over centuries are deeply dysfunctional and very much in need of disruption.
The night often seems darkest just before dawn.
(With that said, I agree there are risks of the kind you describe.)
I don't think you should be getting mental health advice from LLMs; it has been shown that their sycophantic nature reinforces your own diagnosis, not that of a professional. Using an LLM in a field you're knowledgeable in would make you think twice about their accuracy in fields where you are unable to judge correctness personally. It will all sound very convincing, but it may be a fundamentally flawed diagnosis.
Another way of thinking about it: a non-trivial amount of training data is internet comments and blogs, which have an alarming number of self-diagnoses, non-professional diagnoses, and totally fabricated facts about mental illness.
I agree. I was using an LLM to test some assumptions I was making about a chemical combination, and although its reasoning was logical, it made an error about how pH works when you combine two acids of similar strength, and then held onto that incorrect conclusion for the rest of the chat.
Since I wasn't asking anything related to pH, I skimmed past that section and didn't notice the error until much later in the chat when the LLM decided to build upon erroneous reasoning.
I think someone who hadn't studied chemistry would have relied on its answer, since all the rest of the logic would have been correct if the solution really did become more acidic.
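For what it's worth, that pH slip is easy to sanity-check numerically. A minimal sketch with assumed concentrations (0.01 M strong monoprotic acids, equal volumes): mixing adds the H+ but also the volume, so the concentration, and therefore the pH, is unchanged rather than the mixture becoming more acidic.

```python
import math

# Mix equal volumes of two strong monoprotic acids, each 0.01 M.
# (Illustrative numbers; "strong" here means fully dissociated.)
c1, c2 = 0.01, 0.01      # H+ concentration contributed by each acid, mol/L
v1, v2 = 1.0, 1.0        # volumes mixed, L

# Moles of H+ add, but so does the total volume:
h_mixed = (c1 * v1 + c2 * v2) / (v1 + v2)

pH_alone = -math.log10(c1)       # pH of either acid on its own
pH_mixed = -math.log10(h_mixed)  # pH of the mixture: identical

print(pH_alone, pH_mixed)  # both 2.0
```

If one acid were more concentrated than the other, the mixture's pH would land between the two values, never below the stronger one's; treating "acid plus acid" as additive acidity is exactly the kind of plausible-sounding error described above.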
Agree, pointing to TikTok and LLMs as a “disrupter” to the prevailing psychotherapeutic industry is crazy town. Those media reify / amplify the exact same crazy of their training set into absurdity.
To be clear I have nothing against the prevailing psychotherapeutic industry. I’m very thankful for that part of the mental health community. I think it’s rather well-aligned with TikTok too, btw.
> I don't think you should be getting mental health advice from LLMs, it has been shown that their sycophantic nature reinforces your own diagnoses not that of a professionals.
And therapists don't? All the data used in diagnosis is self-reported feelings. All the progress made from seeing a therapist is also self-reported feelings.
Random chance might make more accurate diagnoses than any based on self-reported feelings.
To an extent, but the words you give an LLM are the entirety of what it has to go on, and again they are sycophantic. If you tell it you have depression, and you insist, it will do its best to agree. In the exact same conversation, you could later convince it you have schizophrenia instead. A human wouldn't buy it.
A trained psychologist is going to use their procedural training to diagnose you. Not just your text input: they ask you questions with subtext, and you may not even realise what they learned about you from your answer. With an LLM, you are loading its context with your world view, and it will go off that.
Replacing expert opinion with engagement-baiting content from feed machines and hallucinating matrices seems to me to be part of the problem, not the solution.
Especially using TikTok to try and improve mental health issues seems a bit like trying to fight fire with a hose of jet fuel.
There's certainly no shortage of issues with the mental health professional field, that much is true. I hope you are doing well regardless. I suspect we're going to see this story play out a lot, where the limitations of AI should be a major limiting factor, but people will get results anyway.
I imagine it will rely a lot on the pilot, and how well they understand those limitations. Perhaps the bigger risks are for those without a good understanding of LLMs who just treat one like an all-knowing expert human.
If you are only capable of blind trust or distrust then there’s a huge difference between human experts and LLMs (and it’s of course wise to put your blind trust in a human, not an LLM). But if you have more of a ”trust no one” or ”trust but verify” mentality in general then it’s not so clear cut. The LLMs have their advantages. For example every chat with an LLM is an independent sample, whereas once a doctor has diagnosed you it’s very hard to get them to consider evidence contradictory to that diagnosis.
If there is one horror I have, it is one of the really mentally unwell people I knew growing up pairing up with an LLM that feeds into everything they say. LLMs are great for truth seekers, who know how to deal with the yes-sayer mentality of the chatbot. If there is one type of person I am 100% sure cannot deal with that, it would be mental health patients.
That doesn't mean I think LLMs couldn't be used for therapy with great success. Just not general-purpose models without a lot of tweaking.
I fully agree that the night seems dark. But having recovered from a state that many institutionalized professionals deemed hopeless I have a different perspective.
Have you seen One Flew Over the Cuckoo's Nest? I think it serves as a good model to explain why we can both be right. You’re looking at the average effect of the institution across every minor character that appears in frame and deem it positive, and it was. I’m looking at what happened to Jack Nicholson and see the institution as deeply dysfunctional. If you think about it I think you’ll agree that I’m not wrong about that.
I agree that institutionalized anything can develop blind spots that make those institutions work broadly but fail to help (and in extreme cases hurt) individuals within said institutional context, although that doesn't really contradict my point.
Just because getting kicked in the head occasionally helps someone with a neurological defect doesn't mean we should recommend getting kicked in the head by a horse as a remedy, if you follow my point. It means we need to figure out why it helped those people and maybe find a way that doesn't require an equestrian.
And in the case of therapy we already have a lot of knowledge about what works and what doesn't. Therapeutic psychiatrists typically can't give patients the time and attention they would require, for structural reasons, so LLMs have real potential here. But such an LLM needs to know how to deal with patients in different conditions. E.g. imagine a patient with a schizophrenic disorder convincing an LLM to feed into their paranoid schizophrenia. My experience with LLMs so far is that they would happily do just this, if you're persistent enough.
I liked the blog post. I try not to "game the system" and usually get all judgemental about people who do. Okay, I am old fashioned. Putting on my Gordon Gekko hat ("greed is good"), of course, it is the disparity between system reaction times that enables gaming it. What system can one develop to get inside an adversary's OODA loop (observe, orient, decide, act)? I like the idea that it's a (meta)system problem, that it's about leverage.
When you read the title, "Why everything feels broken and what can we do", it is obvious OP turned a me problem into an us problem.
It's a common pitfall to take yourself out of the equation when you are about to make assertions without evidence to back them up. The title could have been written as:
"Why do I feel everything is broken, and what can I do about it".
You will end up writing a completely different post as a result: a more sincere one, and one that is closer to the reality of the situation that needs to be addressed, which is one's perceptions, biases, and feelings that come from our own personal experiences.
> The title could have been written as: "Why do I feel everything is broken, and what can I do about it".
Sometimes collective action is required to fix an issue. Restricting people to individual action is depriving them of their freedom of association, a core enabler of democracy.
It’s an old strategy that tries to take agency from people by splitting them up.
If you convince many powerless people to work alone, they aren’t gonna be a threat.
Both of your comments have a grain of truth; it just doesn't apply to this scenario.
What halayli wrote is spot on in this context. If you're writing a blog post and calling it a day, then you're not trying to change the status quo. If he were giving talks, seeking out interviews, etc. to address this issue, you'd have a basis to argue your points...
But like this? No, it doesn't hold up.
It's super normalized however, and not specific to this blog. Most blogs seem to call on the reader to do things. And even if every reader did, you still wouldn't have changed the status quo, because that needs a way bigger investment.
Seems like you've fixated on a detail about the title and have ignored the substance of the post. Whether or not everyone agrees that "everything feels broken" doesn't really matter.
But the substance of the post appears to be word-salad and I've already had my five-a-day.
No, OP is right. Everything is broken. Thinking your system isn't and it's OP's issue is not sincere.
Actually, when I read this, it was obvious that this issue was an us problem and not just a me problem.
No, I think the original post has the right idea. I think attempting to individualise this to your extent would dilute the original message and make the article slightly pointless
Except the OP is kind of right I feel.
Well yes, but you then deny people a collective position. Please consider that we live in a world where strong corporate actors massively profit from us not having a collective position. Shifting responsibility to the individual is one of their strongest tools in the toolbox to ensure problems do not get solved.
I think it is important from time to time to exit the world of individual responsibility ("What can I do to reach the moon?") and enter the world of collective organization ("What can we do to reach the moon?"). You probably understand why.
The framework derived is also based on power-dynamic comparisons, which are circular, without an objective base.
It's an opinion piece that has no real merit unless you tie it to reality.
When the central thesis is that the brokenness comes from systemic issues there is very little a lone individual can do. That's the fundamental issue when it comes to systemic issues. The alternative article you suggest would therefore not be interesting.
It is strange finding this comment at the top given its fundamental misunderstanding of TFA.
Well certainly, some of us are all in on psychopathic material accumulation and so of course in that case the only things that are broken are those that stand in your way, such as morality or society.
Lots of bold assertions without evidence. Author claims Google's high salaries killed entrepreneurship. Is there any data to back this claim?
Or the idea that democracy can't adapt to social media discourse; not everyone is chronically online. Politicians still respond to public sentiment to a similar degree as they always have.
Then there's this:
> AI systems aren't just tools—they're deployed faster than we can develop frameworks for understanding their social implications.
If they aren't just tools, what are they? Why do we need a framework for understanding their social implications?
Post feels like a fever dream of someone who fell asleep to the Navalmanack audiobook.
The number of unicorns was 39 when the term was coined in 2013. There were 119 by 2018, and 1,284 by May 2024.
From:
https://en.wikipedia.org/wiki/Unicorn_(finance)#History
So not only is there no data to back up the author's claim, but the claim is wrong and could be checked by looking at Wikipedia.
You don't have to be right, and you can make claims, but when you make grand claims you should at least check Wikipedia.
Your broader point may stand, but that's not a counterargument if you give the original claim the benefit of the doubt: it can reasonably be interpreted as referencing lifestyle-oriented businesses, niche B2B companies, anything small-growth with a low ceiling.
Counting unicorns only serves to bolster that point: those are large VC-fueled ships which operate on a completely different level. Because it is clear at founding time which type of company they are, it's reasonable to include that as a qualifier.
Now if the data showed more small-growth companies were started, that'd be a stronger counterargument.
What do businesses need? Startup capital. What’s a good way to obtain it? Having a high salary and saving.
The counter argument is that high salaries enable more people to save and take risk rather than just those that start out wealthy.
The main counterpoint, that people get comfortable and don't take risks, is definitely true, but I'm not sure how much that impacts the amount of entrepreneurship that would otherwise have happened.
Not sure about that.
https://news.crunchbase.com/startups/google-stanford-and-the...
This makes sense so long as you define “entrepreneurship” as the act of making a legal entity with the goal of using venture capital to hit a billion dollar valuation as quickly as possible.
In fact, since the number of billion-dollar valuations goes up by an order of magnitude every few years, we are on track for every person on earth to start a billion-dollar corporation in a few short decades. This is proof, as you could see on Wikipedia, that entrepreneurship is doing great.
Given the frequent "it's not X—it's Y" type of constructions, lack of researched data, and the em-dashes, unfortunately I think this is the fever dream of a GPU cluster humming away somewhere.
The only thing worse than the author claiming that is you asking for "data". Like what data? This is qualitative. And yes, Google killed one particular flavor of startup that paid its employees with smoke.
It’s always kind of funny when people respond to a subjective opinion with “where’s the data?” because unless you’re responding with data it’s pretty much just a way of saying “I don’t like this”
The blog post makes a bold claim about how the rise of tech salaries at Google killed entrepreneurship. The poster could have at least given this a sniff test: how about, for example, showing a graph of Google salaries overlaid with a graph of the number of start-ups from the crunchbase database? As he didn't even go to this minimum effort, it's valid to ask "where's the data" here.
This is a good point. When somebody says something “killed” an intangible thing, they are obviously speaking literally. Honestly we don’t even need a graph, surely if he’s alleging a murder he would be able to produce the weapon, like a knife or a gun
Now obviously someone could mistakenly interpret that sentiment as “incredibly high salaries attracted talent of such caliber that they could have started their own companies but did not” but that simply can’t be possible because there is no graph that shows all of the businesses that didn’t happen, therefore it is literal.
It’s not like anybody on here has worked with somebody that could easily have started their own company but chose SWE at a FAANG. It’s just not a thing
Yeah, I noticed this too. It's a kind of response one would make if one felt threatened by the subject matter.
> And yes, Google killed one particular flavor of startup that paid its employees with smoke
To me, it reads like a desperate far-fetched argument to deny employees a fair compensation for their work. As if there is any virtue in stiffing people out of their paycheck.
With a lot of fancy wording, the article basically proposes that slow-moving, bureaucratic educational institutions should catch up with TikTok’s latest algorithm, helping raise the next generation of influencers.
To the effect it clearly proposes anything!
"billions of influencers"
This isn’t particularly controversial tbh.
Yes Google’s high salaries are part of a system that has been reworking entrepreneurship in Silicon Valley. This has been documented and discussed at length. Did you look for data?
Big tech pays in valuable stock, and salaries can reach upwards of 500k for relative rank and file positions (not rare one offs). Over a decade, that’s $5M. At the same time, VC firms have been holding companies private longer raising more rounds, which often dilutes the employee shares and reduces the “reward” for employees waiting for an IPO. If that new diluted IPO rewards an employee under $5M for a decade of employment, they were better off at Google/Meta/etc. Startups were always a lottery ticket, but if a “winning” ticket is less profitable than not playing, why join at all?
This plays directly into the thesis that the powerful are extracting additional resources at the expense of cultural expectations and understandings. VC firms diluting employees is profitable for VCs, but it jeopardizes the Silicon Valley startup ecosystem if smart people prefer better compensation elsewhere. Same with the recent AI acqui-hire controversies like Windsurf. Why join a startup if the CEO will take a billion-dollar payout and leave the employees with worthless stock?
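The back-of-the-envelope comparison above can be sketched in a few lines of Python. Every number here is a hypothetical round figure for illustration (the 0.1% grant, 20% dilution per round, three rounds, and the $1B exit are assumptions, not data), and it ignores taxes, vesting, and liquidation preferences:

```python
def startup_payout(initial_stake, dilution_per_round, rounds, exit_valuation):
    """Employee equity value at exit after successive dilutive rounds.

    Each funding round multiplies the employee's stake by
    (1 - dilution_per_round); the remaining stake is then valued
    at the exit valuation. Purely illustrative.
    """
    return initial_stake * (1 - dilution_per_round) ** rounds * exit_valuation

# Ten years of big-tech total comp at $500k/year:
big_tech_decade = 500_000 * 10  # $5,000,000

# A "winning" startup ticket: 0.1% grant, diluted 20% in each of
# three rounds, exiting at a $1B valuation:
startup_decade = startup_payout(0.001, 0.20, 3, 1_000_000_000)

print(big_tech_decade, round(startup_decade))  # → 5000000 512000
```

Under these assumptions the winning lottery ticket pays roughly a tenth of the salaried path, which is the commenter's point: if even a win underperforms not playing, why join at all?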
1. This post seems likely to have been written with the assistance of a chatbot, which explains this phrase. The "they aren't just tools" phrase is only there because he needed a third example for a "it's not just X--it's Y" paragraph, one of ChatGPT's all-time fave constructions.
2. Another cause is more fundamental: the third "leverage" doesn't really apply to this discussion IMO. It's probably useful elsewhere, but owning a large social media company is just a different variety of capital, not some mutually-exclusive group. All expressions of capitalist power in 2025 have some amount of technological amplification going on.
>Why do we need a framework for understanding their social implications?
I would guess that the argument for wanting a framework for understanding language models’ social implications would be the social implications of language models. Like a phenomenon existing is a valid argument for understanding it.
https://www.theatlantic.com/technology/archive/2025/07/chatg...
https://www.rollingstone.com/culture/culture-features/ai-spi...
https://www.psychologytoday.com/us/blog/urban-survival/20250...
>not everyone is chronically online
Do you have data to support this? What does “chronically” mean and why would its absence invalidate the idea of social media impacting how people act and vote?
This is a blog post, not a research paper. The evidence is anecdotal, and most people with common sense know by intuition the facts that the author mentions.
What a strange article. It feels like one of those cases where the author gathers so many of the pieces, then, just... fails to solve the puzzle. Or get anywhere close.
I agree that everything feels broken. I'd like to do something about it. Let me deploy my leverage to work on that. Here's my "labor leverage", right here, this comment. Check. Leverage strength... not much. Let's bring my "capital leverage" to bear... okay, done, my 401(k) is invested in my favorite companies. Did you notice? No? Okay, leverage strength... let's go with epsilon^2. And my "code leverage"... uh... I don't think I have any.
So, wait, I, personally, don't have any third-order leverage at all? How am I supposed to go up against trillion-dollar, billion-node networks, with my epsilon^3 "leverage"?
That's the real problem: I don't actually have any meaningful leverage. I'm not in the game.
Is that actually true? Doesn't matter: I believe it to be true. Sure looks like everything is broken to me.
What are you talking about?
This is about hyper-scaling unicorns who can create and scale a business faster than the regulatory framework[government] can respond. "Disruption" is a dog whistle for "we're going to break the law faster than the government can keep up." The game is scaling from disruption to entrenchment before the first "The United States vs You" hits your desk.
If you don't have the ability to play the game then you can believe it doesn't exist, but those playing it found a slot machine that only lands on 777.
Right, that's one of the pieces.
The article does not really fit that piece into the framework of "why do people in society feel that things are broken?" (which your comment seems to do just fine). Hence my complaint about the article.
Unfortunately sibling comments have exposed this article as likely to be AI slop, which would explain pretty much everything about it. Now I am reassessing my own ability to be taken in by slop, and that's depressing in different ways....
Engineering/STEM training doesn't have what is required to fix such problems. So they have to get involved with multi-disciplinary groups or nothing useful will ever get built that actually fixes things.
This is why a lot of people within tech/science circles feel lost and defensive about their work. They barely understand anything about the humanities/social sciences.
Consciousness of what is missing is increasing slowly, thanks to the info tsunami the internet has unleashed.
But that info delivery architecture relies on pseudo-experts and celebs whose survival depends on collecting views, and is delivered to the mind in such random order, with high levels of overstimulation and noise, that it creates even more confusion.
What's missing?
No foundations in Philosophy. No idea where Value Systems come from. No idea how they are maintained - learn - adapt to change. No idea why all religious systems train their priests in some form of "Pastoral Care" involving constant contact with ordinary people and their suffering.
So the Vatican survives the fall of nations/empires/plagues/economic downturns/reformation/enlightenment/pedo scandals etc but science/engineering orgs look totally helpless reacting to systemic shocks and go running to Legal/HR/PR people for help.
That's at the org level. At the individual level, most tech folk pretend the limitations/divisions of their own brain/mind don't exist and have no impact on what they build. There is no awareness of what Plato/Hume/Freud/Kahneman have to say about it, and of how those divisions of the non-united mind, and the denial of them, affect what gets built. And since the article mentions systems running at different speeds: think about the electrical and chemical signaling in your own mind. Are they happening at the same speed?
So don't try to work all this out by yourself. Multi-disciplinary groups are our only hope. If the org is filled with only engineers, history already shows us how the story unfolds.
Bureaucracy and institutions are by definition slower. It has always been like that.
If they want to have more impact, they need to adapt to more market-like techniques.
Whether an institution moving faster would be good or bad, I am not sure, actually. Something that belongs to "everyone", with all kinds of heated opinions, is probably not the best place to move fast.
OTOH, and this is as a Spaniard (I do not know enough about the specifics of America), I feel that nowadays parts of the institutions are "injected" with changes that society is not demanding from them, destroying the established base and traditions of (in this case) society. Social engineering and influencing campaigns, I would say.
Maybe this is not the main topic of the article. Just was a brain dump I was doing about observations of my own.
These things have always existed actually. It is just that with technology and many people using it, information from individuals, etc. this influence is probably made more effectively.
> Something fundamental has shifted in how power works, and most of our institutions haven't noticed. We're living through what might be called "leverage arbitrage divergence"—a growing gap between how fast some actors can change the world and how fast others can respond to those changes.
I disagree that the way «Power» works has fundamentally shifted. This is a classic pattern of hegemony/insurgence/counter-insurgence.
Excellent observation. But I believe your proposed solutions could be strengthened. Improving “leverage literacy” may be a first step, but the fundamental mismatch in velocity will continue expanding the gap between technology and social institutions.
The question then becomes, how do we increase the velocity of social institutions to keep pace with technology? Balaji’s blockchain native societies come to mind. The comment in the thread about needing philosophical roots in engineering is interesting too. Curious what you think.
I agree with the premise and appreciate the definitions that capture a situation that's still being understood, but the solutions are vapid.
"Awareness" as an option for an information saturated society is not the pathway for anything right now
The articles on this blog before 2025 never contained em-dashes. Not even a single one.
And the articles from 2025 onward all use em-dashes.
Curious, huh?
> Each operates at a different mathematical order: linear, exponential, and systematic respectively.
This was where I stopped reading. "Systematic" mathematical order. Sure.
When I saw the classic
It's not X -- It's Y
in the final paragraph, it became very obvious that a lot of this was AI generated. Please, speak in your own voice! The message of this article is almost completely ruined by the use of ChatGPT as a writing crutch.
It has all the classics
I know I shouldn't care at all. However I can't help but feel a little bit sad knowing that a (I suppose) once genuine blogger simply gave up and decided to delegate his thinking process to a token machine.
I hate AI writing, but I get it on pointless B2B marketing & awful linkedin posts, they're worth nothing, and it's a great way to block and ignore an account.
But the fact that a blog trying to communicate an interesting idea is using a predictive machine to word things is just so depressing. Is it your blog, or is it ChatGPT's? Did you come up with the term "leverage arbitrage", or did a predictive model?
Did OP just provide an outline and let the predictive model write the entire thing? It just feels sad.
>I know I shouldn't care at all.
Personally I do care about punctuation, it's part of the (my) style and presentation, the way to convey meaning, etc. Still a reference to a classic: https://news.ycombinator.com/item?id=7865024
Oh, I do care about punctuation (at least when I'm writing long-form stuff). I mean I shouldn't have cared enough to look at the past articles or comment here. The proper response to AI submissions is to ignore and optionally flag.
Agility is the one trait by which a new entrant can unseat incumbents.
Lose that, and you'll be stuck in a stagnated first past the post world.
Does that mean all new is good and old is bad? No. And "hypernovelty" has huge problems, as it leaves no time for individuals nor society to adapt. But tread carefully with what you wish for.
> making employment more attractive than company-building
This is decidedly untrue. You can make a few clever(er) points here: VC meddling is rotting at the core of ingenuity (which I happen to agree with) or even that large companies (GOOG/MSFT/etc.) are tangentially capturing startups via incentives (think free credits, etc.). But author doesn't make these, so I won't argue against (or for) them.
> Today, higher-leverage actors are strip-mining institutional commons—democratic norms, social trust, educational relevance, economic mobility—faster than lower-leverage actors can regenerate them.
This will seem like a technicality, but for a pedant like myself, it's quite important: this is absolutely not the tragedy of the commons. It might be a new tragedy (maybe we'll call it "theft"?), but the interesting and paradoxical nature of the original has nothing to do with this reformulation. What makes the tragedy of the commons interesting is that all agents will favor maximizing their local maxima even though doing so diminishes the global maximum. A sort of "missing the forest for the trees" thing.
> The stakes couldn't be higher. If we don't develop conscious approaches to managing leverage arbitrage, we risk a future where technological capability advances exponentially while social coordination capacity deteriorates linearly—a recipe for civilizational breakdown.
This seems a tad dramatic; "leverage arbitrage" was significantly higher during colonial or industrial times, and society didn't exactly collapse. I agree with the sentiment, but I'm not sure about all the "oh my God" dramatics.
> helping people recognize the type of power they're actually wielding and use it more consciously
lol, lmao even. I don't even think this would work if there wasn't capitalism in play.
> Most importantly, we need to redesign how we measure success and allocate resources.
Oh, they are suggesting we dismantle capitalism.
Even if we were to dismantle capitalism in America, TikTok is a Chinese business/product. So this has to be a global solution. While we're wielding our lever that can move the globe let's fix global warming, war, hunger, etc while we're at it. No point in half measures.
Does feel good to read though, nice and fluffy.
Remember that for the vast majority of human history, most people have not felt like they were truly in control of their lives. If it was not God, it was the king. Remember that nearly every animal except humans dies by being eaten. We and our pets are nearly the only creatures that die in their sleep. To suggest that the world is supposed to "work" in a way that leaves us feeling un-anxious is a modern sense of entitlement. For most animals, and most humans, the only thing you were entitled to was to scrape by desperately until you finally, eventually, failed and died.
Our institutions were more stable before because people were more dedicated to maintaining them. They were created because overt pain or death was the alternative. But then they began to protect us from discomfort. Then from fear of discomfort. And from discomfort that exists only in the mind. This was bound to come apart around the time some people, when asked to be afraid, simply said "meh."
Will this go badly? Certainly, from the perspective of whether people feel safe and content. But we have had 10,000 years of self-generated strife from humans. It is our fuel.
While I believe the reasoning to be utterly littered with weak arguments, I agree with the final conclusion/question of the author.
But what the author isn't clearly saying is that he is describing a free market as opposed to a planned economy; even if he doesn't specifically say so, that's the implication of the final question.
And I am all for a planned economy. I want to cooperate with my fellow humans, not compete to crush them. In this day and age it is absurd not to. The technology is there; we just need to use it the right way. I know many will have a knee-jerk reaction to this statement and start spitting out the old anti-Soviet arguments about planned economies. I'm bracing.
If you want competition, then accept the social outcomes. Wanting to equalize competition through regulation is exactly like banning war crimes. When war is raging, all kinds of war crimes will be committed.
I'll disagree. But fundamentally the problem of a centrally planned economy is whether or not you agree with the plan.
I worked in the defense industry, specifically on weapons meant for export. At the time, I was considered a subject matter expert. Something that eventually got me selected to travel to a foreign nation that had purchased an update to that weapon we sold them.
This country had an active internal conflict at the time, and the soldiers I was training had used the previous iteration of that weapons system. Prior to flying out, my peers had forewarned me that these young men loved to try to horrify us soft westerners. I was no exception, of course. For me, it came when we were relaxing over dinner and one of the soldiers felt it was my turn. He showed me photographs of a family of corpses: 5 adults and 4 children, the youngest I'd guess being an infant and the oldest probably no older than 8 or 9 years old. A multi-generational household, all civilian, all unarmed. And here he was, telling me in unabashed gruesome detail how he had murdered them during a patrol, speaking as if he had brought justice to an ancient crime. Using that older iteration of the weapons I had helped create.
But I had my marching orders: no actions or words that could potentially lead to an incident. Whatever I felt got buried by black humor and a few tips on how they could do it more efficiently in the future. Later on I heard my callousness had earned a modicum of respect.
Knowing what I just told you, would you feel comfortable in your planned economy if you were ordered to join my team and work on improving that same weapon to the best of your ability? Would you say that it is the right way to use the technology we are building? Knowing who it was going to go to?
We have a planned economy, just one planned by capitalists.
It doesn't seem to matter whether an individual works for the weapons manufacturers; they're going to build them with my tax dollars and sell them to monsters whether I like it or not.
We already have a planned economy in a lot of ways anyway. We just don't have much collective control over who governs those levers, particularly when it comes to resource allocation. VC might look like a free market approach on the surface, but it gives huge power to shape the economy to unaccountable, unelected randos who likely "come from money" with a lot of skeletons in their closets. Even if they don't, having that kind of power with essentially no checks is a recipe for disaster.
We don't have to copy the Soviets exactly, but it's foolish to pretend like their system didn't work. Sure they killed and imprisoned people, but we do the same on at least the same scale. Socialism at least in principle isn't specifically aiming for that outcome. Whatever we're doing here kinda does. Our country would crumble if the war machine ever stopped. Changing our approach to anything is just too expensive to consider, and dissent can't be tolerated for the same reason. What options do we really have?
Surely we can do better
> to be utterly littered with weak arguments
It's just AI generated, which is a shame to see at the top of the site.
An economy is only free if left uncontrolled. IMO the emergence of "cartels" and colluding entities (including politicians) has destroyed the notion of truly free markets.
"humans"
Your complaint is actually about humans.
As long as we have humans we will have people that seek to cultivate and maintain power at all costs.
> And I am all for planned economy. In this day and age it is obsurd not to this.
"Well, did it work for those people?"
"No, it never does. I mean, these people somehow delude themselves into thinking it might, but... but it might work for us."
This classic reductionist argument is so trivial it could have been automatically posted by an LLM.
The majority of those people say they regretted the collapse of the Soviet Union. Not sure where your fantasy facts of "did not work" come from.
Technically the market economy we live in is designed to function by containing a whole lot of smaller planned economies (businesses).
Interesting and well written, thanks for posting! That said, I think there's ample reason to file this under "writer steeped in SV-coded libertarianism unintentionally rediscovers the arguments in favor of socialism."
I get where he's coming from, but "leverage" (AKA competitive advantage in a market?) seems like a woefully inadequate framework with which to understand the overall direction of society. Regardless: what's a "systematic" growth rate...? I'm obviously a math noob, but I've never heard of that, and don't see it mentioned on Wikipedia: https://en.wikipedia.org/wiki/Growth_rate_%28group_theory%29

In general this seems like a good summary of the most Marxist dichotomy of them all, labor vs. capital, with "code" tacked on to the end even though it doesn't really fit. I've cut a lot of responses to the examples below, but I think it's easy to pick out the ones that don't fit once you start looking -- I leave it as an exercise for the reader ;)
Another way to say this would be[1]: "Constant revolutionising of production, uninterrupted disturbance of all social conditions, everlasting uncertainty and agitation distinguish the bourgeois epoch from all earlier ones. All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air, all that is holy is profaned..."

This seems to be the main thesis, which I'd rephrase as: "Accumulation of wealth at one pole is, therefore, at the same time accumulation of misery, agony of toil, slavery, ignorance, brutality, mental degradation, at the opposite pole, i.e., on the side of the class that produces its own product in the form of capital."[2]

And this is why I felt compelled to write a long response in the first place. The solution to "the capitalists are controlling everything and I don't like the results" is to band together as workers and exert control over the situation! The stereotype of libertarians as the kind of people to offer the destitute "financial literacy" classes instead of help is a trite one, but clearly not obsolete...

Yes, I agree -- we need to greatly reduce/eliminate the power that shareholders have to prioritize YoY equity growth over the needs of society! Another way to say this would be: "As capitalist, he is only capital personified. His soul is the soul of capital. But capital has one single life impulse, the tendency to create value and surplus-value, to make its constant factor, the means of production, absorb the greatest possible amount of surplus-labour. Capital is dead labour, that, vampire-like, only lives by sucking living labour, and lives the more, the more labour it sucks."[3]

Or, in the words of noted anti-Capitalist Sam Altman: "We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project."[4]
This really sums up our disagreement. He thinks the solution to the problems he details is to coordinate more closely with the capitalists -- I feel confident that it's to disempower them. To put it in libertarian terms: their incentives are not a good fit for this situation!

[1] The Communist Manifesto: https://www.marxists.org/archive/marx/works/1848/communist-m...
[2] Das Kapital, Chapter 25: https://www.marxists.org/archive/marx/works/1867-c1/ch25.htm
[3] Das Kapital, Chapter 10: https://www.marxists.org/archive/marx/works/1867-c1/ch10.htm
[4] The OpenAI Charter, April 2018: https://web.archive.org/web/20230714043611/https://openai.co...
Thank you for the truly thoughtful comment. I’m not OP, and I have the polar opposite political/economic opinion. But I come to HN for interesting, intelligent perspectives, and you have provided that. Thank you comrade.
Thank you for writing the comment i was too lazy to write
So VCs bad for funding ideas, big tech bad for paying high salaries. I completely reject your notions.
It feels like you completely missed the point.
[dead]
Excuse me, I can't help but cite this one wonderful song:
"When everything's made to be broken, I just want you to know who I am" (c) Iris by Goo Goo Dolls
I feel it touches on something deep that has to do with the current state of the tech world.
Or maybe it just feels that way because the world is being fixed at unprecedented rates?
For example, I had the misfortune of needing help from the mental health community a decade ago. I sometimes think about how much easier that part of my life would have been if TikTok and Claude had existed back then. It’s easy to forget, but many of the institutions we’ve built over centuries are deeply dysfunctional and very much in need of disruption.
The night often seems darkest just before dawn.
(With that said, I agree there are risks of the kind you describe.)
I don't think you should be getting mental health advice from LLMs; it has been shown that their sycophantic nature reinforces your own diagnoses rather than those of a professional. Using an LLM in a field you're knowledgeable in would make you think twice about their accuracy in fields where you can't personally judge correctness. It will all sound very convincing, but may be a fundamentally flawed diagnosis.
Another way of thinking about it: a non-trivial amount of training data is internet comments and blogs, which have an alarming amount of self-diagnoses, non-professional diagnoses, and totally fabricated facts about mental illness.
I agree. I was using an LLM to test some assumptions I was making about a chemical combination, and although its reasoning was logical, it made an error about how pH works when you combine two acids of similar strength, then held onto that incorrect conclusion for the rest of the chat.
Since I wasn't asking anything related to pH, I skimmed past that section and didn't notice the error until much later in the chat when the LLM decided to build upon erroneous reasoning.
I think someone who hadn't studied chemistry would've relied on its answer, since all the rest of the logic would've been correct if the solution really had become more acidic.
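For what it's worth, the pH point is easy to sanity-check with a few lines of Python. This is a rough back-of-the-envelope sketch, not my original chat (I'm assuming strong, fully dissociated acids and made-up concentrations for illustration): mixing two acids of the same pH just averages their H+ concentrations, so the mixture is no more acidic than either input.

```python
import math

def ph(h_conc):
    """pH = -log10 of the H+ concentration (mol/L)."""
    return -math.log10(h_conc)

def mix_h_conc(c1, v1, c2, v2):
    """H+ concentration after mixing two solutions:
    total moles of H+ divided by total volume.
    Assumes strong (fully dissociated) acids and ideal mixing."""
    return (c1 * v1 + c2 * v2) / (v1 + v2)

# Two strong acids, both 0.01 M (pH 2), mixed in equal volumes:
h = mix_h_conc(0.01, 1.0, 0.01, 1.0)
print(round(ph(h), 2))  # 2.0 -- the mixture is not more acidic than either acid
```

An LLM confidently asserting the mixture gets more acidic would fail exactly this kind of check.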
Agree, pointing to TikTok and LLMs as a “disrupter” to the prevailing psychotherapeutic industry is crazy town. Those media reify / amplify the exact same crazy of their training set into absurdity.
To be clear I have nothing against the prevailing psychotherapeutic industry. I’m very thankful for that part of the mental health community. I think it’s rather well-aligned with TikTok too, btw.
> I don't think you should be getting mental health advice from LLMs; it has been shown that their sycophantic nature reinforces your own diagnoses rather than those of a professional.
And therapists don't? All the data used in diagnosis is self-reported feelings. All the progress made from seeing a therapist is also self-reported feelings.
Random chance might make more accurate diagnoses than any based on self-reported feelings.
To an extent, but the words you give an LLM are the entirety of what it has to go on, and again they are sycophantic. If you tell it you have depression, and you insist, it will do its best to agree. In the exact same conversation, you could later convince it you have schizophrenia instead. A human wouldn't buy it.
A trained psychologist is going to use their procedural training to diagnose you. Not just text input; they are asking you questions with subtext, and you may not even realise what they learned about you from your answer. With an LLM, you are loading its context with your world view, and it will go off that.
I’m well aware of the risks. But I think you have an unrealistically rosy picture of professionals’ diagnoses.
Replacing expert opinion with engagement-baiting content from feed machines and hallucinating matrices seems to me is a part of the problem, and not the solution.
Especially using TikTok to try and improve mental health issues seems a bit like trying to fight fire with a (edit: spelling) hose of jet fuel.
Sometimes it’s exactly the right medicine. :)
I very much think you're wrong
There's certainly no shortage of issues with the mental health professional field, that much is true. I hope you are doing well regardless. I suspect we're going to see this story play out a lot, where the limitations of AI should be a major limiting factor, but people will get results anyway.
I imagine it will rely a lot on the pilot, and how well they understand those limitations. Perhaps the bigger risks are those without good understanding of LLMs who just treat it like an all knowing expert human.
If you are only capable of blind trust or distrust then there’s a huge difference between human experts and LLMs (and it’s of course wise to put your blind trust in a human, not an LLM). But if you have more of a ”trust no one” or ”trust but verify” mentality in general then it’s not so clear cut. The LLMs have their advantages. For example every chat with an LLM is an independent sample, whereas once a doctor has diagnosed you it’s very hard to get them to consider evidence contradictory to that diagnosis.
I am really beginning to think that the Gell-Mann amnesia effect has a shining example in LLM usage.
It's obvious there's no intelligence behind it when you make it try to do something with data that gets parsed by a computer.
I hope you’re not seriously comparing TikTok/LLM-s with proper mental health institutions.
Man we are cooked.
Indeed, we are.
If there is one horror I have, it is one of the really mentally unwell people I knew growing up pairing up with an LLM that feeds into everything they say. LLMs are great for truth seekers who know how to deal with the yes-sayer mentality of the chatbot. If there is one type of person I am 100% sure cannot deal with that, it would be mental health patients.
That doesn't mean I think LLMs couldn't be used for therapy with great success. Just not general-purpose models without a lot of tweaking.
I fully agree that the night seems dark. But having recovered from a state that many institutionalized professionals deemed hopeless I have a different perspective.
Have you seen One Flew Over the Cuckoo's Nest? I think it serves as a good model to explain why we can both be right. You’re looking at the average effect of the institution across every minor character that appears in frame and deem it positive, and it was. I’m looking at what happened to Jack Nicholson and see the institution as deeply dysfunctional. If you think about it I think you’ll agree that I’m not wrong about that.
I agree that institutionalized everything can develop blind spots that make those institutions work broadly, but fail to help (and in extreme cases hurt) individuals within said institutional context, although that doesn't really contradict my point.
Just because getting kicked in the head occasionally helps someone with a neurological defect doesn't mean we should recommend getting kicked in the head by a horse as a remedy, if you follow my point. It means we need to figure out why it helped those people and maybe find a way that doesn't require an equestrian.
And in the case of therapy we already have a lot of knowledge about what works and what doesn't. Therapeutic psychiatrists typically can't give patients the time and attention they would require, for structural reasons, so LLMs have real potential here. But that LLM needs to know how to deal with patients in different conditions. E.g. imagine a patient with a schizophrenic disorder convincing an LLM into feeding into their paranoid schizophrenia. My experience with LLMs so far is that they would happily just do this, if you're persistent enough.
I liked the blog post. I try not to "game the system" and usually get all judgemental about people who do. Okay, I am old fashioned. Putting on my Gordon Gekko hat ("greed is good"), of course, it is the disparity between system reaction times that enables gaming it. What system can one develop to get inside an adversary's OODA loop (observe, orient, decide, and act)? I like the idea that it's a (meta)system problem, it's about leverage.