• @seaQueue@lemmy.worldOP
      215 points · 1 month ago

      The best I can do is an ML model running on an NPU that parses JSON in subtly wrong and impossible to debug ways

      • @Aceticon@lemmy.world
        56 points · 1 month ago

        Just make it an LJM (Large JSON Model) capable of predicting the next JSON token from the previous JSON tokens, and you would have massive savings in file storage and network traffic from not having to store and transmit full JSON documents, all in exchange for an “acceptable” error rate.

      • @AeroLemming@lemm.ee
        54 points · 1 month ago

        You need to make sure to remove excess whitespace from the JSON to speed up parsing. Have an AI read the JSON as plaintext and convert it to a handwriting-style image, then another one to use OCR to convert it back to text. Trailing whitespace will be removed.
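For the record, the handwriting/OCR round trip is not strictly required — Python's `json` module can strip the excess whitespace on its own. A minimal sketch, using a made-up payload:

```python
import json

# Hypothetical pretty-printed payload, padded with excess whitespace.
bloated = json.dumps({"model": "LJM", "tokens": [1, 2, 3]}, indent=4)

# Re-serialize with compact separators: no spaces, no newlines, no OCR.
minified = json.dumps(json.loads(bloated), separators=(",", ":"))

assert json.loads(minified) == json.loads(bloated)  # nothing lost but whitespace
```

No handwriting model was harmed in the minification of this document.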

        • @knorke3@lemm.ee
          4 points · 1 month ago

          Did you know? By indiscriminately removing every 3rd letter, you can ethically decrease input size by up to 33%!
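A sketch of the proposed compressor (hypothetical, data loss included free of charge): dropping every 3rd character removes `len(s) // 3` characters, which works out to the advertised reduction of up to 33%:

```python
def compress(s: str) -> str:
    # Keep a character unless its 1-based position is a multiple of 3.
    # Lossy by design; decompression is left as an exercise for the LJM.
    return "".join(c for i, c in enumerate(s, start=1) if i % 3 != 0)

doc = '{"key": "value", "list": [1, 2, 3]}'
small = compress(doc)
# Drops len(doc) // 3 characters: exactly 33% when len(doc) is a multiple of 3.
```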

      • Terrasque
        7 points · 1 month ago

        So you’re saying it’s already feature complete with most json libraries out there?