CleverLikeAnOx 2 hours ago

An old-timer I worked with during my first internship called these kinds of issues "the law of coincidental failures", and I took it to heart.

I try a lot of obvious things when debugging to ascertain the truth. Like, does undoing my entire change fix the bug?

  • K0balt 2 hours ago

    Yeah, good times. I just recently had one that was a really strong misdirection; it ended up being two other, unrelated things that conspired to make it look convincingly like my code was not doing what it was supposed to. I even wrote tests to see if I had found a corner-case compiler bug or some broken library code. I was halfway through opening an issue on the library when the real culprit became apparent. It was actually a subtle bug in the testing setup, combined with me errantly defining a hardware interface on an ancient protocol as an HREG instead of an IREG, which just so happened to work fine until it created a callback loop inside the library through some kind of stack smashing or wayward pointer. I was really starting to doubt my sanity on this one.
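
    To give a feel for the failure mode, here's a toy sketch (not the real Modbus library; the register bank and callback names are made up): a holding register (HREG) is writable by the master and can fire an on-write callback, while an input register (IREG) is read-only, so exposing the wrong kind lets a careless callback call right back into the write path.

      // Toy model, not the real Modbus stack: holding registers (HREG) are
      // writable by the master and can trigger an on-write callback; input
      // registers (IREG) are read-only from the master's side.
      #include <cstdint>
      #include <functional>
      #include <map>

      struct RegisterBank {
          std::map<std::uint16_t, std::uint16_t> hreg;   // read/write
          std::map<std::uint16_t, std::uint16_t> ireg;   // read-only to the master
          std::map<std::uint16_t, std::function<void(std::uint16_t)>> onWrite;

          void writeHreg(std::uint16_t addr, std::uint16_t val) {  // master's write path
              hreg[addr] = val;
              auto cb = onWrite.find(addr);
              if (cb != onWrite.end()) cb->second(val);            // write callback fires
          }
      };

      int main() {
          RegisterBank bank;

          // Correct: a sensor reading exposed as an input register can only be
          // read by the master, so no write callback can ever fire.
          bank.ireg[0] = 1234;

          // Buggy: the same reading exposed as a holding register, with a
          // callback that "refreshes" it by writing it again, recurses through
          // the write path until the stack is gone.
          bank.onWrite[0] = [&](std::uint16_t v) { bank.writeHreg(0, v); };
          // bank.writeHreg(0, 1234);  // uncommenting this recurses forever
      }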

    • throwup238 2 hours ago

      The joys of modbus PLCs, I take it?

      • K0balt 28 minutes ago

        Ah, yes. But a roll-your-own device with C++ on bare metal, so lots more fun.

        (We’ll need a few thousand of these, and the off-the-shelf solution is around $1k vs. $1.50 for RYO.)

        By the way, the RISC-V Espressif ESP32-C3 is a really amazing device for < $1. It’s actually cheaper to go Modbus-TCP over WiFi than to put RS485 on the board with a MAX485 and the associated components. It also does Zigbee and BT, and the Espressif libraries for the radio stack are pretty good.

        Color me favorably impressed with this platform.

codeulike 2 hours ago

“I am sitting with a philosopher in the garden; he says again and again 'I know that that’s a tree', pointing to a tree that is near us. Someone else arrives and hears this, and I tell him: 'This fellow isn’t insane. We are only doing philosophy.'”

― Ludwig Wittgenstein

tshaddox 11 minutes ago

Gettier cases are fun, although the infinite regress argument is a much clearer way to show that JTB is a false epistemology.

hamandcheese 31 minutes ago

I believe that Schrödinger's cat also applies to software bugs. Every time I go looking, I find bugs that I don't believe existed until I observed them.

lifeisstillgood 31 minutes ago

Oh this is such a better and more useful idea than some other common ones like “yak shaving” or DRY

Love it

nmaley an hour ago

Gettier cases tell us something interesting about truth and knowledge: namely, that a factual claim should depict the event that was the effective cause of the claim being made. Depiction is a picturing relationship: a correspondence between the words and a possible event (e.g. a cow in a field). Knowledge is when the depicted event was the effective cause of the belief. Since the papier-mâché cow was the cause of the belief, not a real cow, our intuitions tell us this is not normal knowledge. Therefore, true statements must have both a causal and a depictional relationship with something in the world. Put another way, true statements implicitly describe a part of their own causal history.

namuol 2 hours ago

I always come back to this saying:

“Debugging is the art of figuring out which of your assumptions are wrong.”

(Attribution unknown)

  • throwawayForMe2 5 minutes ago

    I always thought of what I learned in some philosophy class, that there are only two ways to generate a contradiction.

    One way is to reason from a false premise, or as I would put it, something we think is true is not true.

    The other way is to mix logical levels (“this sentence is false”).

    I don’t think I ever encountered a bug from mixing logical levels, but the false premise was a common culprit.

  • PaulDavisThe1st 2 hours ago

    As long as "your assumptions" includes "I know what I am doing", then OK.

    But most people tend not to include that in the "your assumptions" list, and frequently it is the source of the bug.

    • recursive an hour ago

      What if you never believed that in the first place?

      • PaulDavisThe1st 4 minutes ago

        Then you're good to ignore that as a possible source of the problem.

PaulDavisThe1st 2 hours ago

> true, because it doesn't make sense to "know" a falsehood

That's a problem right there. Maybe that made sense to the Greeks, but it definitely doesn't make any sense in the 21st century. "Knowing" falsehoods is something we broadly acknowledge that we all do.

  • kragen 16 minutes ago

    No, I think many people use a definition of "know" that doesn't include "knowing" falsehoods. Possibly you and they have fundamentally different beliefs about the nature of reality, or possibly you are just using different definitions for the same word.

  • n4r9 an hour ago

    Could you elaborate what you mean by that?

    • PaulDavisThe1st an hour ago

      We all carry around multiple falsehoods in our heads that we are convinced are true for a variety of reasons.

      To say that this is not "knowing" is (as another commenter noted) hair-splitting of the worst kind. In every sense it is a justified belief that happens to be false (we just do not know that yet).

      • bee_rider 15 minutes ago

        What exactly does it mean to know something, then, as distinct from believing it? Just the justification? And then, I guess, it doesn't have to be a very good justification if it can be wrong?

  • JadeNB an hour ago

    > That's a problem right there. Maybe that made sense to the Greeks, but it definitely doesn't make any sense in the 21st century. "Knowing" falsehoods is something we broadly acknowledge that we all do.

    I think the philosophical claim is that, when we think we know something, and the thing we think we know turns out to be false, what has happened isn't that we knew something false, but rather that we didn't actually know the thing in the first place. That is, not our knowledge, but our belief that we had knowledge, was mistaken.

    (Of course, one can say that we did after all know it in any conventional sense of the word, and that such a distinction is at the very best hair splitting. But philosophy is willing to split hairs however finely reason can split them ….)

    • PaulDavisThe1st an hour ago

      The problem with the hair splitting is that it requires differentiating between different brain states over time where the only difference is the content.

      On Jan 1 2024 I "know" X. Time passes. On Jan 1 2028, I "know" !X. In both cases, there is

      (a) something it is like to "know" either X or !X

      (b) discernible brain states that correspond to "knowing" either X or !X and that are distinct from "knowing" neither

      Thus, even if you don't want to call "knowing X" actually "knowing", it is in just about every sense indistinguishable from "knowing !X".

      Also, a belief that we had the knowledge that relates to X is indistinguishable from a belief that we had the knowledge that relates to !X. In both cases, we possess knowledge which may be true or false. The knowledge we have at different times alters; at all times we have a belief that we have the knowledge that justifies X or !X, and we are correct in that belief - it is only the knowledge itself that is false.

      • kragen 7 minutes ago

        Maybe the people who use "know" in the way you don't are talking about something other than brain states or qualia. There are lots of propositions like this; if I say, "I fathered Alston", that may be true or false for reasons that are independent of my brain state. Similarly with "I will get home tomorrow before sunset". It may be true or false; I can't actually tell. The same is true of the proposition "I know there are coins in the pocket of the fellow who will get the job", if by "know" we mean something other than a brain state, something we can't directly observe.

        You evidently want to use the word "know" exclusively to describe a brain state, but many people use it to mean a different thing. Those people are the ones who are having this debate. It's true that you can render this debate, like any debate, into nonsense by redefining the terms they are using, but that in itself doesn't mean that it's inherently nonsense.

        Maybe you're making the ontological claim that your beliefs about X don't actually become definitely true or false until you have a way to tell the difference? A sort of solipsistic or idealistic worldview? But you seem to reject that claim in your last sentence, saying, "it is only the knowledge itself that is false."

w10-1 2 hours ago

The "programmer's model" is the programmer's mental model of what's happening. You're senior and useful when you not only understand the system but can also diagnose, from a symptom or bug, which aspect of the system is implicated.

But you're staff and above when you can understand when your programming model is broken, and how to experiment to find out what it really is. That almost always goes beyond the specified and tested behaviors (which might be incidentally correct) to how the system should behave in untested and unanticipated situations.

Not surprisingly, problems here typically stem from gaps in the programming model between developers or between departments, each with their own perspective on the elephant, and their incidence in production is an inverse function of how well people work together.

  • hibikir 2 hours ago

    You are defining valid steps in the understanding of software, but attaching them to job titles is just going to lead to very deceptive perspectives. If your labeling were accurate, every organization I've ever worked at would have at least triple the number of staff engineers that it does.

recursive an hour ago

Seems somehow related to "parallel" construction of evidence.

conformist an hour ago

This is very common in finance. Knowing whether finance research that made correct predictions with good justifications falls into the "Gettier category" is extremely hard.

jayd16 2 hours ago

Hmm, are there better cases that disprove JTB? Couldn't one argue that reliance on a view that can't tell papier-mâché from a cow is simply not a justified belief?

Is the crux of the argument that justification is an arbitrary line and ultimately insufficient?

  • aithrowawaycomm 2 hours ago

    Yes, the paper itself is much less ambiguous (and very short): https://courses.physics.illinois.edu/phys419/sp2019/Gettier....

    Its examples are correct but contrived and unrealistic, so later examples are more plausible (e.g. being misled by a mislabelled television program from a station with a strong track record of accuracy).

    The point is not disproving justified true belief so much as showing the inadequacy of any one formal definition: at some point we have to elevate evidence to assumption, and there's not a one-size-fits-all way to do that correctly. And, similarly to the software engineering problems, a common theme is the ways you can get bitten by looking at simple and seemingly true "slices" of a problem that miss the complex whole.

    It is worth noting that Gettier himself was cynical and dismissive of this paper, claiming he only wrote it to get tenure, and he never wrote anything else on the topic. I suspect he didn't find this stuff very interesting, though it was fashionable.

  • abeppu 2 hours ago

    I like the example of seeing a clock as you walk past. It says it's 2:30. You believe that the time is 2:30. That seems like a perfectly reasonable level of justification -- you looked at a clock and read the time. If unbeknownst to you, that clock is broken and stuck at 2:30, but you also just happened to walk by and read it at 2:30, then do you "know" that it's 2:30?

    I think a case can't so much "disprove" JTB as illustrate that adopting a definition of knowledge is more complex than you might naively believe.

  • dherls 2 hours ago

    I was thinking that one solution might be to specify that the "justification" also has to be a justified true belief. In this case, the justification that you see a cow isn't true, so it isn't a JTB.

    Of course that devolves rapidly into trying to find the "base cases" of knowledge that are inherent.

kortilla 2 hours ago

Meh, these are just examples of the inability to correctly root-cause issues. There is a good lesson in here about the real cause being lack of testing (the teammate’s DOM change should never have merged) and lack of monitoring (the upstream mail provider failure should have been setting off alerts a long time ago).

The changes were merely adjacent to the causes, and that’s super common in any system that has a few core pieces of functionality.

I think the core lesson here is that if you can’t fully explain the root cause, you haven’t found the real reason, even if it seems related.

  • DJBunnies an hour ago

    Yeah, why did he rebase unreleased code?

    And the “right” RC only has to be right enough to solve the issue.

barrystaes 2 hours ago

Well, this is a roundabout way of describing justified thinking about a belief that just happens to align with some actual facts.

  • barrystaes 2 hours ago

    On a more serious note: populist politicians seem to like making Gettier claims; they cost a lot of time to refute and are free publicity. AKA the worst kind of fake news.

    • mistermann 2 hours ago

      A rather ambitious claim considering the context!

rhelz 2 hours ago

The impossibility of solving the Gettier problem meshes nicely with the recent trend toward Bayesianism and Pragmatism. Instead of holding out for justified true beliefs and labeling them "bang-bang" as either True or False, give them degrees of belief that are most useful for prediction and control.

JohnMakin 2 hours ago

I wasn’t aware there was a term for this, or that this was not common knowledge. For me, I refer to them as “if I fix this, it will break EVERYTHING” cases, which come up frequently in my particular line of work and which my peers generally tend to understand as well. Cause and effect in complex systems is of course itself complex, which is why the first thing I typically do in any environment is set up metrics and monitoring. If you have no idea what is going on at a granular level, you’ll quickly jump to bad conclusions and waste a lot of time, a.k.a. $.

  • JackFr 2 hours ago

    I’ve come across (and possibly written) code that upon close examination seems to work only accidentally: there are real failures which are somehow hidden by the behavior of other systems.

    The classic and oft heard “How did this ever work?”

    • QuercusMax an hour ago

      In at least a few cases I can think of, the answer was almost definitely "it actually never did work, we just didn't notice how it was broken in this case".

    • JohnMakin 2 hours ago

      I think this stuff is really funny when I find it, and I have a whole list of the funniest bugs like this that I have found. It comes up particularly when I get into dealing with proxies and response/error handling between backend systems and frontend clients: sometimes the middle layer has been silently handling errors forever, in a way no one understood, or the client code has adapted to them in a way where fixing it will break things badly. Big systems naturally evolve in this way and can take a long time to ever come to a head. When it does, that’s when I try to provide consulting, lol.

  • K0balt 2 hours ago

    This is horrifying, and needs a trigger warning lol. It gave me a sense of panic to read it. It’s always bad when you get so lost in the codebase that it’s just a dark forest of hidden horrors.

    When this kind of thing tries to surface, it’s a warning that you need to 10x your understanding of the problem space you are adjacent to.

    • namuol 2 hours ago

      The surest way to get yourself into a mess like this is to assume that a sufficiently complex codebase can be deeply understood in the first place.

      By all means you can gain a lot by making things easier to understand, but only in service of shortcuts while developing or debugging. But this kind of understanding is not the foundation your application can safely stand on. You need detailed visibility into what the system is genuinely doing, and our mushy brains do a poor job of emulating any codebase, no matter how elegant.

    • JohnMakin 2 hours ago

      I guess I’ve worked in a lot of ancient legacy systems that develop over multiple decades: there are always haunted forests and swaths of arcane or forgotten knowledge. One time I inherited a Kubernetes cluster in an account no one knew how to access, and when I finally hacked into it I discovered troves of crypto-mining malware shit. It had been serving prod traffic quietly, untouched, for years. This kind of thing is crazy common. I find disentangling these types of projects to be fun, personally, depending on how much agency I have. But I’m not really a software developer.