adidoit an hour ago

Not sure if it's coincidental that OpenAI's open weights release got delayed right after an ostensibly excellent open weights model (Kimi K2) got released today.

https://moonshotai.github.io/Kimi-K2/

OpenAI know they need to raise the bar with their release. It can't be a middle-of-the-pack open weights model.

  • lossolo an hour ago

    This could be it, especially since they announced last week that it would be the best open-source model.

ryao 2 hours ago

Am I the only one who thinks mention of “safety tests” for LLMs is a marketing scheme? Cars, planes and elevators have safety tests. LLMs don’t. Nobody is going to die if an LLM gives an output that its creators do not like, yet when they say “safety tests”, they mean that they are checking to what extent the LLM will say things they do not like.

  • natrius 2 hours ago

    An LLM can trivially instruct someone to take medications with adverse interactions, steer a mental health crisis toward suicide, or make a compelling case that a particular ethnic group is the cause of your society's biggest problem so they should be eliminated. Words can't kill people, but words can definitely lead to deaths.

    That's not even considering tool use!

    • ryao 2 hours ago

      This is analogous to saying a computer can be used to do bad things if it is loaded with the right software. Coincidentally, people do load computers with the right software to do bad things, yet people are overwhelmingly opposed to measures that would stifle such things.

      If you hook up a chat bot to a chat interface, or add tool use, it is probable that it will eventually output something that it should not and that output will cause a problem. Preventing that is an unsolved problem, just as preventing people from abusing computers is an unsolved problem.

      • pesfandiar an hour ago

        Society has accepted that computers bring more benefit than harm, but LLMs could still get pushback due to bad PR.

      • ronsor 2 hours ago

        As the runtime of any program approaches infinity, the probability of the program behaving in an undesired manner approaches 1.

        • ryao 2 hours ago

          That is not universally true. The yes program is a counterexample:

          https://www.man7.org/linux/man-pages/man1/yes.1.html
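
          For reference, yes amounts to little more than this loop (a minimal sketch in Python, not the actual coreutils source):

              # Toy re-implementation of yes(1): print the argument(s), or "y", forever.
              # Sketch only - the real program is C and uses buffered writes.
              import sys

              arg = " ".join(sys.argv[1:]) or "y"
              while True:
                  print(arg)

          It does exactly one thing, forever, so more runtime never raises the odds of it doing something it should not.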

          • cgriswald 2 hours ago

            Devil's advocate:

            (1) Execute yes (with or without arguments, whatever you desire).

            (2) Let the program run as long as you desire.

            (3) When you stop desiring the program to spit out your argument,

            (4) Stop the program.

            Between (3) and (4) some time must pass. During this time the program is behaving in an undesired way. Ergo, yes is not a counterexample to the GP's claim.

            • ryao an hour ago

              I upvoted your reply for its clever (ab)use of ambiguity to argue the other side of a fairly open-and-shut case.

              That said, I suspect the other person was actually agreeing with me, and was suggesting that software incorporating LLMs will eventually malfunction because that is true of all software. The yes program was an obvious counterexample. It is almost certain that every LLM will eventually generate some output that is undesired, given that it chooses each next token based on probabilities. I say almost only because I do not know how to prove the conjecture.

              There is also some ambiguity in what counts as an LLM, since the first L means large and nobody has given a precise definition of large. In literature from several years ago you will find people calling 100 million parameters large, while some people these days will refuse to use the term LLM for a model of that size.
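
              A back-of-envelope way to see why "almost certain" is the right phrase: if each sampled token carried some fixed probability p > 0 of being undesired (a simplifying assumption; real sampling is not independent across steps), the chance of at least one such token in n tokens is 1 - (1 - p)^n, which tends to 1 as n grows:

                  # Chance of at least one undesired token in n samples,
                  # assuming a fixed, independent per-token probability p.
                  p = 1e-6
                  for n in (10**3, 10**6, 10**9):
                      print(n, 1 - (1 - p) ** n)
                  # ~0.001, ~0.63, ~1.0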

              • cgriswald an hour ago

                Thanks, it was definitely tongue-in-cheek. I agree with you on both counts.

    • bilsbie 2 hours ago

      PDFs can do this too.

    • thayne an hour ago

      Part of the problem is the marketing of LLMs as more capable and trustworthy than they really are.

      And the safety testing actually makes this worse, because it leads people to trust that LLMs are less likely to give dangerous advice, when they could still do so.

    • 123yawaworht456 2 hours ago

      does your CPU, your OS, your web browser come with ~~built-in censorship~~ safety filters too?

      AI 'safety' is one of the most neurotic twitter-era nanny bullshit things in existence, blatantly obviously invented to regulate small competitors out of existence.

      • no_wizard an hour ago

        It isn’t. This is dismissive without first thinking through the difference in application.

        AI safety is about being proactive. For example: if an AI model is used to screen hiring applications, making sure it doesn’t have any racial bias baked into its weights.

        The difference here is that it’s not reactive. Reading a book with a racial bias would be the inverse, where you would be reacting to that information.

        That’s the basis of proper AI safety in a nutshell.

        • ryao an hour ago

          As someone who has reviewed people’s résumés submitted with job applications in the past, I find this difficult to imagine. The résumés that I saw had no racial information. I suppose the names might correlate with such information, but anyone feeding these things into an LLM for evaluation would likely redact the name to avoid bias. I do not see an opportunity for proactive safety in the LLM design here. It is not even clear that they are evaluating whether there is bias in the scenario where someone did not properly sanitize the inputs.

          • thayne an hour ago

            > but anyone feeding these things into a LLM for evaluation would likely censor the name to avoid bias

            That should really be done for humans reviewing the resumes as well, but in practice that isn't done as much as it should be

  • olalonde an hour ago

    Especially since "safety" in this context often just means making sure the model doesn't say things that might offend someone or create PR headaches.

  • recursive 2 hours ago

    I also think it's marketing but kind of for the opposite reason. Basically I don't think any of the current technology can be made safe.

    • nomel an hour ago

      Yes, perfection is difficult, but safety is relative, and these models can definitely be made much safer. Looking at analyses of pre- vs. post-alignment models makes this obvious, including when the raw unaligned models are compared to "uncensored" models.

  • jrflowers 2 hours ago

    > Am I the only one who thinks mention of “safety tests” for LLMs is a marketing scheme?

    It is. It is also part of Sam Altman’s whole thing about being the guy capable of harnessing the theurgical magicks of his chat bot without shattering the earth. He periodically goes on Twitter or a podcast or whatever and reminds everybody that he will yet again single-handedly save mankind. Dude acts like he’s Buffy the Vampire Slayer

  • ks2048 2 hours ago

    You could be right about this being an excuse for some other reason, but lots of software has “safety tests” beyond life or death situations.

    Most companies, for better or worse (I say for better) don’t want their new chatbot to be a RoboHitler, for example.

    • ryao 2 hours ago

      It is possible to turn any open-weight model into that with fine-tuning. It is likely possible to do the same with closed-weight models, even when there is no creator-provided sandbox for fine-tuning them, through clever prompting and repeated attempts. It is unfortunate, but there really is no avoiding it.

      That said, I am happy to accept the term safety used in other places, but here it just seems like a marketing term. From my recollection, OpenAI made a push for regulation that would have stifled competition, by talking about these things as dangerous and in need of safety measures. Then they backtracked somewhat when they found the proposed regulations would restrict them rather than just their competitors. However, they are still pushing this safety narrative that was never really appropriate. They have a term for this, alignment: what they are doing is testing alignment in areas they deem sensitive, so that they have a rough idea of the extent to which outputs in those areas might contain things they do not like.

  • eviks 2 hours ago

    Why is your definition of safety so limited? Death isn't the only type of harm...

    • ryao 2 hours ago

      There are other forms of safety, but a digital parrot saying something that people do not like is not a safety issue. They are abusing the term safety for marketing purposes.

      • eviks 2 hours ago

        You're abusing the terms by picking either the overly limited ("death") or overly expansive ("not like") definitions to fit your conclusion. Unless you reject the fact that harm can come from words/images, a parrot can parrot harmful words/images, and so can be unsafe.

        • jazzyjackson 29 minutes ago

          it's like complaining about bad words in the dictionary

          the bot has no agency, the bot isn't doing anything, people talk to themselves, augmenting their chain of thought with an automated process. If the automated process is acting in an undesirable manner, the human that started the process can close the tab.

          Which part of this is dangerous or harmful?

        • ryao 2 hours ago

          The maxim “sticks and stones can break my bones, but words can never hurt me” comes to mind here. That said, I think this misses the point that the LLM is not a gatekeeper to any of this.

          • jiggawatts an hour ago

            I find it particularly irritating that the models are so overly puritan that they refuse to translate subtitles because they mention violence.

          • eviks 2 hours ago

            Don't let your mind's potential be limited by such primitive slogans!

mystraline 3 hours ago

To be completely and utterly fair, I trust DeepSeek and Qwen (Alibaba) more than American AI companies.

American AI companies have shown they are money and compute eaters, and massively so at that. Billions later, and well, not much to show.

But DeepSeek cost $5M to develop and introduced multiple novel training techniques.

Oh, and their models and code are all FLOSS. The US companies are closed. Basically, the US AI companies are too busy treating each other as vultures.

  • kamranjon 3 hours ago

    Actually, the majority of Google models are open source, and Google has been pretty fundamental in pushing a lot of training techniques forward. Working in the AI space, I’ve read quite a few of their research papers, and I really appreciate what they’ve done to share their work and release their models under licenses that allow commercial use.

    • simonw 2 hours ago

      "Actually the majority of Google models are open source"

      That's not accurate. The Gemini family of models are all proprietary.

      Google's Gemma models (which are some of the best available local models) are open weights but not technically OSI-compatible open source - they come with usage restrictions: https://ai.google.dev/gemma/terms

      • kamranjon an hour ago

        You’re ignoring the T5 series of models, which were incredibly influential. The T5 models and their derivatives (FLAN-T5, Long-T5, ByT5, etc.) have been downloaded millions of times on Hugging Face and are real workhorses. New variants were still being produced within the last year or so.

        And yeah, the Gemma series is incredible, and while it may not meet the OSI standard, I consider these models pretty open as far as local models go. It’s not just the standard Gemma variants, either: Google is releasing other incredible Gemma models that I don’t think people have even caught wind of yet, like MedGemma, whose 4B variant has vision capability.

        I really enjoy their contributions to the open source AI community and think they’re pretty substantial.

  • Aunche 3 hours ago

    $5 million was the GPU-hour cost of a single training run.

    • dumbmrblah an hour ago

      Exactly. Not to minimize DeepSeek's tremendous achievement, but that $5 million was just for the training run; it doesn't include the GPUs they purchased beforehand, or all the OpenAI API calls they likely used to assist with synthetic data generation.

  • ryao 3 hours ago

    Wasn’t that figure just the cost of the GPUs and nothing else?

    • rpdillon 3 hours ago

      Yeah, I hate that this figure keeps getting thrown around. IIRC, it's the price of 2048 H800s for 2 months at $2/hour/GPU. If you consider a month to be 30 days, that's around $5.9M, which roughly lines up. What doesn't line up is ignoring the costs of facilities, salaries, non-cloud hardware, etc., which I'd expect to dominate. $100M seems like a fairer estimate, TBH. The original paper had more than a dozen authors, and DeepSeek had about 150 researchers working on R1, which supports the notion that personnel costs would likely dominate.
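
      Spelling that out from the remembered numbers above (a rough sketch, not DeepSeek's own accounting):

          # GPU rental implied by 2048 H800s for ~2 months at $2/GPU-hour
          gpus, days, rate = 2048, 60, 2.00
          gpu_hours = gpus * days * 24      # ~2.95M GPU-hours
          print(gpu_hours * rate)           # ~$5.9M, GPU time only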

    • 3eb7988a1663 2 hours ago

      That is also just the final production run. How many experimental runs were performed before starting the final batch? It could be some ratio like 10 hours of research to every one hour of final training.

  • refulgentis 2 hours ago

    > Billions later, and well, not much to show.

    This is obviously false, I'm curious why you included it.

    > Oh, and their models and code are all FLOSS.

    No?

krackers 3 hours ago

Probably the results were worse than the K2 model released today. No serious engineer would say it's for "safety" reasons, given that ablation nullifies any safety post-training.

  • simonw 2 hours ago

    I'm expecting (and indeed hoping) that the open weights OpenAI model is a lot smaller than K2. K2 is 1 trillion parameters and almost a terabyte to download! There's no way I'm running that on my laptop.

    I think the sweet spot for local models may be around the 20B size - that's Mistral Small 3.x and some of the Gemma 3 models. They're very capable and run in less than 32GB of RAM.
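
    Rough memory math for why ~20B works locally (assuming the 4-bit quantization most local setups use; exact overhead for the KV cache and runtime varies):

        # Approximate RAM needed just for the weights of a 20B-parameter model
        params = 20e9
        print(params * 2 / 1e9)    # fp16: ~40 GB - would not fit in 32 GB
        print(params * 1 / 1e9)    # 8-bit quantized: ~20 GB
        print(params * 0.5 / 1e9)  # 4-bit quantized: ~10 GB, leaving headroom for context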

    I really hope OpenAI put one out in that weight class, personally.

etaioinshrdlu 2 hours ago

It's worth remembering that the safety constraints can be successfully removed, as demonstrated by uncensored fine-tunes of Llama.

dorkdork 3 hours ago

Maybe they’re making last minute changes to compete with Grok 4?

stonogo 3 hours ago

we'll never hear about this again