• @bane_killgrind@lemmy.ml
    19 months ago

    So I’m not sure it’s helping you.

    You would be skipping the work of organizing the concept in your head into a clearly communicable explanation of it.

    • I think of it with another analogy.

      Compare a screwdriver with a power tool.

      Does the convenient solution hinder you from building your house simply because you can’t “feel” the strength of the wood while turning the screw in?

      I doubt it.

      The things you mentioned come into play when people treat AI as a god mode. As a user, you are solely responsible for how you use a tool. If the user overestimates the power of the tool or uses it for the wrong things, it’s the user’s fault.

      The scientist is still a scientist, and still the author of the paper. GPT isn’t, just because it writes filler text or puts the scientist’s thoughts into sentences.

      The context is still on the scientist’s plate. If the scientist does a poor job of reviewing GPT’s output, GPT can’t be faulted.

      • @bane_killgrind@lemmy.ml
        19 months ago

        A research paper is not bulk work like a house. It’s more like a watch, and a watchmaker using a screw gun is daft.

        • That’s another point. Fair enough.

          But I still don’t think that science will stall just because of ChatGPT.

          Journalism? That will, for sure. But scientific publishing has systemic problems (publisher monopolies, publication scores, etc.), and outsourcing writing work to ChatGPT is, in my opinion, not one of them.

          • @bane_killgrind@lemmy.ml
            19 months ago

            Sure, but using bad tools is going to get you worse results than using the right tools. If we define “worse” as “less volume”, then sure, GPT is fine.