• Tja@programming.dev
    4 days ago

    Again, not data integrity (error correction) but consistency (the C in aCid). Adding two milliseconds to a half-millisecond operation is by no means cheap…

    • silasmariner@programming.dev
      4 days ago

      But adding it to an 80ms operation is. If your operation takes 0.5ms, it’s either a read on a small table or maybe a single write – transaction isolation wouldn’t even be relevant there. You’re right that I meant consistency rather than integrity, a slip of terminology, but not really worth quibbling over. The point is that I like my data to make sense, a funny quirk of mine.
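As a minimal sketch of the consistency point being argued here (several writes that must succeed or fail together), consider a hypothetical two-account transfer using Python's stdlib sqlite3; the table and account names are invented for illustration, not taken from the thread:

```python
import sqlite3

# Hypothetical ledger: a transfer touches two rows, and consistency
# means both updates land together, or neither does.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

# The connection used as a context manager wraps the body in a
# transaction: commit on success, rollback if an exception is raised.
with conn:
    conn.execute("UPDATE accounts SET balance = balance - 30 "
                 "WHERE name = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 30 "
                 "WHERE name = 'bob'")

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 70, 'bob': 80}
```

If either UPDATE failed, the rollback would leave both balances untouched, which is exactly the "data that makes sense" property being defended.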

      • Tja@programming.dev
        3 days ago

        If your single operations take 80ms, either it’s a toy app or someone didn’t do their job (unoptimized queries, wrong technology, wrong data modeling, etc.).

        • silasmariner@programming.dev
          edited 3 hours ago

          Lol, what an absurd take. A transaction is a sequence of operations, not a single one, so even small tables can cross that threshold with enough query logic. I guess you’re unfamiliar with medium-to-large datasets, but in real-world situations it’s not uncommon to use the aggregate functions SQL provides, and on large tables those can easily exceed 1s. Toy my arse. Go play with yourself.
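The aggregates mentioned here are a single SQL statement that nonetheless scans every matching row; on a table with millions of rows that one statement dominates the runtime. A hypothetical sketch with sqlite3 (the orders table and its contents are invented for illustration):

```python
import sqlite3

# Hypothetical orders table: SUM + GROUP BY must visit every row,
# so its cost grows with table size regardless of how simple it looks.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 10), ("alice", 25), ("bob", 40)])

totals = dict(conn.execute(
    "SELECT customer, SUM(amount) FROM orders GROUP BY customer"))
print(totals)  # {'alice': 35, 'bob': 40}
```

With three rows this is instant; the point in the comment is that the same query shape over a large production table is a legitimately slow operation, not evidence of a toy app.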

          Although this is no surprise tbh because apparently you don’t understand why transactions are even necessary. Benchmarks shmenchmarks. Whether it works is more important.

          I do not apologise for the downvote, because this is smug shit only a junior would say.