• altkey (he\him)@lemmy.dbzer0.com

          Don’t be. Although there are millions of corpses behind each WW2 joke, getting it means you are personally aware of that, and that means something. ‘Those who don’t know shit about past struggles are doomed to repeat them’ and all that.

    • sp3ctr4l@lemmy.dbzer0.com

      Obligatory ‘lore dump’ on the word lollapalooza:

      That word was a common slang term in 1930s/40s American lingo that meant… essentially a very raucous, lively party.

      Note/Rant on the meaning of this term

      The current Merriam-Webster and dictionary.com definitions of this term, ‘an outstanding or exceptional or extreme thing’, are wrong; they are too broad.

      While historical usage varied, it almost always appeared as a noun describing a gathering of many people, one that was so lively or spectacular that you would be exhausted after attending it.

      When it did not appear as a noun describing a lively, possibly also ‘star-studded’ or extravagant, party, it appeared as a term for some kind of action that would leave you bamboozled or discombobulated… similar to ‘that was a real humdinger of a blahblah’ or ‘that blahblah was a real doozy’… which ties into the after-effects of having been through the ‘raucous party’ meaning of lollapalooza.

      So… in WW2, in the Pacific theatre… many US Marines were engaged in brutal jungle combat, often at night, and they adopted a system of verbal identification challenges for when they noticed someone creeping up on their foxholes at night.

      An example of this system used in the European theatre, I believe by the 101st and 82nd airborne, was the challenge ‘Thunder!’ to which the correct response was ‘Flash!’.

      In the Pacific theatre… the Marines adopted a challenge/response system… where the correct response was ‘Lollapalooza’…

      Because native-born Japanese speakers are taught a phoneme that is roughly in between an ‘r’ and an ‘l’… and they very often struggle to say ‘Lollapalooza’ without a very noticeable accent, unless they’ve also spent a good deal of time learning spoken English (or some other language with distinct ‘l’ and ‘r’ phonemes), which very few Japanese had in the 1940s.

      racist and nsfw historical example of / evidence for this

      https://www.ep.tc/howtospotajap/howto06.html

      Now, some people will say this is a total myth, others will say it is not.

      My Grandpa, who served in the Pacific Theatre during WW2, told me it did happen, though he was Navy and not a Marine… but all the other stories I’ve heard that say it did happen attribute it to the Marines.

      My Grandpa is also another source for what ‘lollapalooza’ actually means.

      • I Cast Fist@programming.dev

        It does make sense to use a phoneme the enemy’s language lacks as a verbal check. Makes me wonder if there were any in the Pacific Theatre who settled on ‘Lick’ and ‘Lollipop’.

      • altkey (he\him)@lemmy.dbzer0.com

        I’m still puzzled by what a mess this war must have been if at times you had someone still not clearly identifiable, yet close enough to run a shibboleth check on them, while at any moment either of you could be shot dead.

        Also, the current Russia vs Ukraine conflict seems to have invented the Ukrainian ‘паляница’ (palianytsia, a kind of bread) as a check, but as I have no connection to actual Ukrainians or their UAF, I can’t say whether it’s used anywhere outside the internet.

        • sp3ctr4l@lemmy.dbzer0.com

          Have you ever been to a very dense jungle or forest… at midnight?

          Ok, now, drop mortar and naval artillery shells all over it.

          For weeks, or months.

          The holes this creates are commonly used by both sides as cover and concealment.

          Also, it’s often raining, sometimes quite heavily, such that these holes fill up with water, and you are thus soaking wet.

          Ok, now, add in pillboxes and bunkers, as well as a few spiderwebs of underground tunnel networks, many of which have concealed entrances.

          You do not have a phone. GPS does not exist.

          You might have a map, which is out of date, and you might have a compass, if you didn’t drop or break it.

          A radio is either stationary, or roughly the size and weight of a miniature refrigerator, and one bullet or a good piece of shrapnel will take it out of commission.

          Ok, now, you and all your buddies are either half starving or actually starving, beyond exhausted, getting maybe an average of 2 to 4 hours of sleep a night, and you, and the enemy, are covered in dirt, blood and grime.

          Also, you and everyone else may or may not have malaria, or some other fun disease, so add shit and vomit to the mix of what everyone is covered in.

          Ok! Enjoy your 2 to 8 week long camping trip from hell, in these conditions… also, kill everyone that is trying to kill you, soldier.

            • sp3ctr4l@lemmy.dbzer0.com

              Friendly fire incidents are still fairly common even in the modern era…

              … ask any Brits deployed to Iraq how they feel about the A-10…

              … Pat Tillman was hyped up in the media as an early post-9/11 US casualty who died valiantly in Afghanistan… when the truth was he was actually killed by friendly fire from his own unit, oh and he thought the invasion of Iraq was “fucking illegal”… because Congress is supposed to declare war, not the President…

              Even in the Russo-Ukrainian war, right now, in the past few years, there have been tons of incidents of Russians accidentally shooting their own at fairly close range, due to poor coordination, and I’m sure it’s happened with the Ukrainians as well… and that’s to say nothing of accidentally drone- or arty-striking a friendly infantry squad or tank or IFV or whatnot.

              Just go play any modern semi-realistic war game (Squad, Arma 3/Reforger, etc.) that doesn’t have a pop-up HUD with blue for friend and red for foe, and has friendly fire enabled, and you should be able to see that friendly fire happens all the time with noobs.

              As for fragging… that term, as it originated in Vietnam, specifically referred to tossing a fragmentation grenade into an area (often their bunk) where an officer or NCO was.

              It was a form of mutiny, essentially, against officers that kept sending men into meat-grinders…

              …chewing them out for not maintaining their early M16s which were unreliable as fuck due to being rammed through the production pipeline by McNamara, shoddy quality control from Colt, and everyone just pretending swapping to a new kind of powder in the rounds wouldn’t blow past the designed tolerances of the weapon…

              … or just, you know, fuck being drafted into this bullshit war.

              In the modern day, ‘frag’ is mostly a gamer term that basically just means ‘killed a guy’, and the origin of that term has been obscured, forgotten.

  • RedstoneValley@sh.itjust.works

    It’s funny how people always quickly point out that an LLM wasn’t made for this, and then continue to shill it for use cases it wasn’t made for either (The “intelligence” part of AI, for starters)

    • UnderpantsWeevil@lemmy.world

      LLM wasn’t made for this

      There’s a thought experiment that challenges the concept of cognition, called The Chinese Room. What it essentially postulates is a person who doesn’t speak Chinese sitting in a room, receiving notes written in Chinese and using a giant rulebook to send back appropriate replies in Chinese. From the outside the conversation looks fluent, and the speaker wonders “Does my conversation partner really understand what I’m saying or am I just getting elaborate stock answers from a big library of pre-defined replies?”

      The LLM is literally a Chinese Room. And one way we can know this is through these interactions. The machine isn’t analyzing the fundamental meaning of what I’m saying, it is simply mapping the words I’ve input onto a big catalog of responses and giving me a standard output. In this case, the problem the machine is running into is a legacy meme about people miscounting the number of "r"s in the word Strawberry. So “2” is the stock response it knows via the meme reference, even though a much simpler and dumber machine that was designed to handle this basic input question could have come up with the answer faster and more accurately.

      When you hear people complain about how the LLM “wasn’t made for this”, what they’re really complaining about is their own shitty methodology. They build a glorified card catalog. A device that can only take inputs, feed them through a massive library of responses, and sift out the highest probability answer without actually knowing what the inputs or outputs signify cognitively.
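
      To make the ‘card catalog’ picture concrete, here’s a toy sketch (purely illustrative; a real LLM scores token probabilities rather than storing literal canned strings, but the ‘no understanding, just retrieval’ point is the same):

          # toy "card catalog": map an input onto the closest canned response,
          # with zero understanding of what the words actually mean
          stock_responses = {
              "how many r's are in strawberry": "There are 2 r's in strawberry.",  # the meme answer
              "hello": "Hello! How can I help you today?",
          }

          def reply(prompt: str) -> str:
              key = prompt.lower().strip("?!. ")
              # pure retrieval, no analysis; unknown inputs get generic filler
              return stock_responses.get(key, "Great question!")

          print(reply("How many R's are in strawberry?"))  # -> the meme answer, not the truth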

      Even if you want to argue that having a natural language search engine is useful (damn, wish we had a tool that did exactly this back in August of 1996, amirite?), the implementation of the current iteration of these tools is dogshit, because the developers did a dogshit job of sanitizing and rationalizing their library of data. It’s also, incidentally, why Deepseek was running laps around OpenAI and Gemini as of last year.

      Imagine asking a librarian “What was happening in Los Angeles in the Summer of 1989?” and that person fetching you back a stack of history textbooks, a stack of Sci-Fi screenplays, a stack of regional newspapers, and a stack of Iron-Man comic books all given equal weight? Imagine hearing the plot of the Terminator and Escape from LA intercut with local elections and the Loma Prieta earthquake.

      That’s modern LLMs in a nutshell.

      • shalafi@lemmy.world

        You might just love Blindsight. Here, they’re trying to decide if an alien life form is sentient or a Chinese Room:

        “Tell me more about your cousins,” Rorschach sent.

        “Our cousins lie about the family tree,” Sascha replied, “with nieces and nephews and Neandertals. We do not like annoying cousins.”

        “We’d like to know about this tree.”

        Sascha muted the channel and gave us a look that said Could it be any more obvious? “It couldn’t have parsed that. There were three linguistic ambiguities in there. It just ignored them.”

        “Well, it asked for clarification,” Bates pointed out.

        “It asked a follow-up question. Different thing entirely.”

        Bates was still out of the loop. Szpindel was starting to get it, though…

        • CitizenKong@lemmy.world

          Blindsight is such a great novel. It has not one, not two but three great sci-fi concepts rolled into one book.

          One is artificial intelligence (the ship’s captain is an AI), the second is alien life so vastly different it appears incomprehensible to human minds. And last but not least, and the most wild: vampires as an evolutionary branch of humanity that died out and has been recreated in the future.

          • TommySalami@lemmy.world

            My favorite part of the vampire thing is how they died out. Turns out vampires start seizing when trying to visually process 90° angles, and humans love building shit like that (not to mention a cross is littered with them). It’s so mundane an extinction I’d almost believe it.

          • outhouseperilous@lemmy.dbzer0.com

            Also, the extremely post-cyberpunk posthumans, and each member of the crew is a different extremely capable kind of fucked up model of what we might become, with the protagonist personifying the genre of horror that it is, while still being occasionally hilarious.

            Despite being fundamentally a cosmic horror novel, and relentlessly math-in-the-back-of-the-book hard scifi, it does what all the best cyberpunk does and shamelessly flirts with the supernatural at every opportunity. The sequel doubles down on this, and while not quite as good overall (still exceptionally good, but harder to follow), each of the characters explores a novel and sweet+sad+horrifying kind of love.

            • CitizenKong@lemmy.world

              Oooh, I didn’t even know it had a sequel!

              I wouldn’t say it flirts with the supernatural so much as it has one foot in weird fiction, which is where cosmic horror comes from.

              • outhouseperilous@lemmy.dbzer0.com

                Characters in the sequel include a hive-mind of post-science innovation monks, a straight up witch who charges their monastery at the head of a zombie army, and a plotline about finding what the monks think might be god. And that first scene, which is absolute fire btw.

                Primary themes include… well, the bit of exposition about needing to ‘crawl off one mountain and cross a valley to reach higher peaks of understanding’, and coping as a mostly baseline human surrounded by superintelligences, ‘sufficiently advanced technology’, etc.

      • frostysauce@lemmy.world

        (damn, wish we had a tool that did exactly this back in August of 1996, amirite?)

        Wait, what was going on in August of '96?

      • RedstoneValley@sh.itjust.works

        That’s a very long answer to my snarky little comment :) I appreciate it though. Personally, I find LLMs interesting and I’ve spent quite a while playing with them. But after all, they are like you described: an interconnected catalogue of random stuff, with some hallucinations to fill the gaps. They are NOT a reliable source of information or general knowledge, or even safe to use as an “assistant”. The marketing of LLMs as being fit for such purposes is the problem. Humans tend to turn off their brains and blindly trust technology, and the tech companies are encouraging them to do so by making false promises.

      • outhouseperilous@lemmy.dbzer0.com

        Yes but have you considered that it agreed with me so now i need to defend it to the death against you horrible apes, no matter the allegation or terrain?

      • Knock_Knock_Lemmy_In@lemmy.world

        a much simpler and dumber machine that was designed to handle this basic input question could have come up with the answer faster and more accurately

        The human approach could be to write a (python) program to count the number of characters precisely.
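
        For this particular question, that could be a one-liner (a trivial sketch):

            # exact, deterministic count; no meme contamination possible
            print("strawberry".count("r"))  # -> 3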

        When people refer to agents, is this what they are supposed to be doing? Is it done in a generic fashion or will it fall over with complexity?

        • outhouseperilous@lemmy.dbzer0.com

          No, this isn’t what ‘agents’ do; ‘agents’ just interact with other programs. So, like, move your mouse around to buy stuff, using the same methods as everything else.

          It’s like a fancy, diversely useful, diversely catastrophic, hallucination-prone API.
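
          A minimal sketch of that kind of glue, with every name invented for illustration (not any real framework’s API):

              # hypothetical harness: the model emits an action string, the harness executes it
              def llm(prompt: str) -> str:
                  # stand-in for a real model call; imagine it decides to click "Buy"
                  return "CLICK 220 140"

              def dispatch(action: str) -> None:
                  verb, *args = action.split()
                  if verb == "CLICK":
                      x, y = map(int, args)
                      # a real agent harness would call an OS automation library here
                      print(f"moving mouse to ({x}, {y}) and clicking")
                  else:
                      print("unknown action:", action)

              dispatch(llm("buy the thing"))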

          • Knock_Knock_Lemmy_In@lemmy.world

            ‘agents’ just interact with other programs.

            If that other program is, say, a Python terminal, then can’t LLMs be trained to use agents to solve problems outside their area of expertise?

            I just tested ChatGPT: I asked it to write a Python program to return the frequency of letters in a string, then asked it for the number of L’s in the longest placename in Europe.

                # String to analyze
                text = "Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch"

                # Convert to lowercase to count both 'L' and 'l' as the same
                text = text.lower()

                # Dictionary to store character frequencies
                frequency = {}

                # Count characters
                for char in text:
                    if char in frequency:
                        frequency[char] += 1
                    else:
                        frequency[char] = 1

                # Show the number of 'l's
                print("Number of 'l's:", frequency.get('l', 0))

            I was impressed, until the output:

                Number of 'l's: 16

            (The code itself is correct; run it for real and it prints 11.)

        • UnderpantsWeevil@lemmy.world

          When people refer to agents, is this what they are supposed to be doing?

          That’s not how LLMs operate, no. They aggregate raw text and sift for popular answers to common queries.

          ChatGPT is one step removed from posting your question to Quora.

          • Knock_Knock_Lemmy_In@lemmy.world

            But an LLM as a node in a framework that can call a python library should be able to count the number of Rs in strawberry.

            It doesn’t scale to AGI but it does reduce hallucinations.
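
            Something like this hypothetical wiring, say (names invented for illustration; a bare exec stands in for what would have to be a proper sandbox):

                import re

                def model(prompt: str) -> str:
                    # stand-in for the LLM; imagine it requests a tool call instead of guessing
                    return 'PYTHON: result = "strawberry".count("r")'

                def answer(prompt: str) -> str:
                    out = model(prompt)
                    m = re.match(r"PYTHON:\s*(.+)", out)
                    if m:
                        scope: dict = {}
                        exec(m.group(1), scope)  # real frameworks sandbox this step
                        return str(scope["result"])
                    return out

                print(answer("How many r's in strawberry?"))  # -> 3, computed rather than recalled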

            • UnderpantsWeevil@lemmy.world

              But an LLM as a node in a framework that can call a python library

              Isn’t how these systems are configured. They’re just not that sophisticated.

              So much of what Sam Altman is doing is brute force, which is why he thinks he needs a $1T investment in new power to build his next iteration model.

              Deepseek gets at the edges of this through their partitioned model. But you’re still asking a lot for a machine to intuit whether a query can be solved by some existing Python routine the system has yet to identify.

              It doesn’t scale to AGI but it does reduce hallucinations

              It has to scale to AGI, because a central premise of AGI is a system that can improve itself.

              It just doesn’t match the OpenAI development model, which is to scrape and sort data hoping the Internet already has the solution to every problem.

              • KeenFlame@feddit.nu

                The only thing worse than the AI shills are the tech-bro mansplanations of how “AI works” from people utterly uninformed of the actual science. Please stop making educated guesses for others and typing them out in a teacher’s voice. It’s extremely aggravating.

      • merc@sh.itjust.works

        Imagine asking a librarian “What was happening in Los Angeles in the Summer of 1989?” and that person fetching you … That’s modern LLMs in a nutshell.

        I agree, but I think you’re still being too generous to LLMs. A librarian who fetched all those things would at least understand the question. An LLM is just trying to generate words that might logically follow the words you used.

        IMO, one of the key ideas with the Chinese Room is that there’s an assumption that the computer / book in the Chinese Room experiment has infinite capacity in some way. So, no matter what symbols are passed to it, it can come up with an appropriate response. But, obviously, while LLMs are incredibly huge, they can never be infinite. As a result, they can often be “fooled” when they’re given input that is semantically similar to a meme, joke or logic puzzle. The vast majority of the training data that matches the input is the meme, or joke, or logic puzzle. LLMs can’t reason, so they can’t distinguish between “this is just a rephrasing of that meme” and “this is similar to that meme but distinct in an important way”.

      • Leet@lemmy.zip

        Can we say for certain that human brains aren’t sophisticated Chinese rooms…

    • BarrelAgedBoredom@lemm.ee

      It’s marketed like it’s AGI, so we should treat it like AGI to show that it isn’t AGI. Lots of people buy the bullshit.

    • REDACTED

      There are different types of artificial intelligence. Counter-Strike 1.6 bots, by definition, were AI. They even ‘learned’ new maps by generating navigation waypoints as they explored.

      • ouRKaoS@lemmy.today

        If you want an even older example, the ghosts in Pac-Man could be considered AI as well.

        • SoftestSapphic@lemmy.world

          By this logic any finite state machine is AI.

          These words used to mean things before marketing teams started calling everything they want to sell “AI”

          • SparroHawc@lemmy.zip

            No. Artificial Intelligence has to be imitating intelligent behavior - such as the ghosts imitating how, ostensibly, a ghost trapped in a maze and hungry for yellow circular flesh would behave, and how CS1.6 bots imitate the behavior of intelligent players. They artificially reproduce intelligent behavior.

            Which means LLMs are very much AI. They are not, however, AGI.

            • SoftestSapphic@lemmy.world

              No, the logic for a Pac-Man ghost is a finite state machine.
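
              (For reference, a toy sketch of roughly what that ghost logic amounts to; the state names are from the real game, everything else is simplified:)

                  # a finite state machine: a handful of states plus fixed transition rules,
                  # with no learning and no understanding anywhere
                  from enum import Enum, auto

                  class Mode(Enum):
                      SCATTER = auto()     # retreat toward a home corner
                      CHASE = auto()       # head toward Pac-Man
                      FRIGHTENED = auto()  # flee after a power pellet

                  def step(mode: Mode, timer_expired: bool, pellet_eaten: bool) -> Mode:
                      if pellet_eaten:
                          return Mode.FRIGHTENED
                      if timer_expired:
                          # the real game alternates scatter/chase on a fixed schedule
                          return Mode.SCATTER if mode is Mode.CHASE else Mode.CHASE
                      return mode

                  print(step(Mode.SCATTER, timer_expired=True, pellet_eaten=False))  # Mode.CHASE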

              Stupid people attributing intelligence to something that probably has none is a shameful hill to die on.

              Your god is just an autocomplete bot that you refuse to learn about outside the hype bubble

              • SparroHawc@lemmy.zip

                Okay, what is your definition of AI then, if nothing burned onto silicon can count?

                If LLMs aren’t AI, then absolutely nothing up to this point probably counts either.

                • SoftestSapphic@lemmy.world

                  since nothing burned into silicon can count

                  Oh noo you called me a robot racist. Lol fuck off dude you know that’s not what I’m saying

                  The problem with supporters of AI is they learned everything they know from the companies trying to sell it to them. Like a 50s mom excited about her magic tupperware.

                  AI implies intelligence

                  To me that means an autonomous being that understands what it is.

                  First of all, these programs aren’t autonomous; they need to be seeded by us. We send a prompt or question, and even when left to its own devices, a model doesn’t do anything until it is given an objective or reward by us.

                  Looking up the most common answer isn’t intelligence; there is no understanding of cause and effect going on inside the algorithm, just regurgitation of the dataset.

                  These models do not reason, though some do a very good job of trying to convince us.

              • outhouseperilous@lemmy.dbzer0.com

                Okay but if i say something from outside the hype bubble then all my friends except chatgpt will go away.

                Also chatgpt is my friend and always will be, and it even told me i don’t have to take the psych meds that give me tummy aches!

              • howrar@lemmy.ca

                As far as I’m concerned, “intelligence” in the context of AI basically just means the ability to do things that we consider to be difficult. It’s both very hand-wavy and a constantly moving goalpost. So a hypothetical Pac-Man ghost is intelligent before we’ve figured out how to build it. After it’s been figured out and implemented, it ceases to be intelligent, but we continue to call it intelligent for historical reasons.

          • outhouseperilous@lemmy.dbzer0.com

            Yes, but then we built a weapon with which to murder truth, and with it meaning, so everything is just vibesy meaning-mush now. And you’re a big dumb meanie for hating the thing that saved us from having/being able to know things. Meanie.

    • SoftestSapphic@lemmy.world

      Maybe they should call it what it is:

      Machine learning algorithms from 1990, repackaged and sold to us by marketing teams.

      • outhouseperilous@lemmy.dbzer0.com

        Hey now, that’s unfair and queerphobic.

        These models are from the 1950s, with juiced-up data sets. Alan Turing personally did a lot of work on them, before he cracked the math and figured out they were shit and would always be shit.

    • merc@sh.itjust.works

      then continue to shill it for use cases it wasn’t made for either

      The only thing it was made for is “spicy autocomplete”.

    • Gladaed@feddit.org

      Fair point, but a big part of ‘intelligence’ tasks is memorization.

      • BussyCat@lemmy.world

        Computers, for all intents and purposes, have perfect recall, so a model trained on a large data set should have much better ‘intelligence’. But in reality, what we consider intelligence is extrapolating from existing knowledge, which is what ‘AI’ has shown to be pretty shit at.

        • Gladaed@feddit.org

          They don’t. They can save information to drives, but searching is expensive and fuzzy search is a mystery.

          Just because you can save an mp3 without losing data does not mean you can save the entire Internet in 400 GB and search it in an instant.

          • BussyCat@lemmy.world

            Which is why it doesn’t search in an instant, uses a bunch of energy, and has to rely on evaporative cooling to keep the servers from overheating.

  • qx128@lemmy.world

    I really like checking these myself to make sure it’s true. I WAS NOT DISAPPOINTED!

    (Total Rs is 8. But the LOGIC ChatGPT pulls out is… remarkable!)

  • Korhaka@sopuli.xyz

    I asked it how many Ts are in the names of the presidents since 2000. It said 4, and stated that “Obama” contains 1 T.

  • MrLLM@ani.social

    We gotta raise the bar, so they keep struggling to make it “better”

    My attempt
    0000000000000000
    0000011111000000
    0000111111111000
    0000111111100000
    0001111111111000
    0001111111111100
    0001111111111000
    0000011111110000
    0000111111000000
    0001111111100000
    0001111111100000
    0001111111100000
    0001111111100000
    0000111111000000
    0000011110000000
    0000011110000000
    

    Btw, I refuse to give my money to AI bros, so I don’t have the “latest and greatest”

    • ipitco@lemmy.super.ynh.fr

      Tested on ChatGPT o4-mini-high

      It sent me this

      0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0
      0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0
      0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0
      0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0
      0 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
      0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0
      0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0
      0 0 1 1 1 1 1 1 0 0 0 0 0 0 0 0
      0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0
      1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
      1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
      1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
      1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
      0 0 1 1 1 0 0 1 1 1 0 0 0 0 0 0
      0 1 1 1 0 0 0 0 1 1 1 0 0 0 0 0
      1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0
      

      I asked it to remove the spaces

      
      0001111100000000
      0011111111000000
      0011111110000000
      0111111111100000
      0111111111110000
      0011111111100000
      0001111111000000
      0011111100000000
      0111111111100000
      1111111111110000
      1111111111110000
      1111111111110000
      1111111111110000
      0011100111000000
      0111000011100000
      1111000011110000
      

      I guess I just murdered a bunch of trees and killed a random dude with the water it used, but it looks good

      • xavier666@lemm.ee

        I just murdered a bunch of trees and killed a random dude with the water it used, but it looks good

        Tech bros: “Worth it!”

        • ipitco@lemmy.super.ynh.fr

          It’s a pretty big problem, but as long as governments don’t do shit, we’re pretty much fucked.

          Either we get on the train and contribute to the problem, or we don’t, get left behind, and end up being the ones harmed.

  • Echo5@lemmy.world

    Maybe OP was low on the priority list for computing power? Idk how this stuff works