• US occupying forces in northern Syria are continuing to plunder natural resources and farmland, a practice ongoing since 2011.
  • Recently, US troops smuggled dozens of tanker trucks loaded with Syrian crude oil to their bases in Iraq.
  • The fuel and convoys of Syrian wheat were transported through the illegal settlement of Mahmoudia.
  • Witnesses report a caravan of 69 tankers loaded with oil and 45 with wheat stolen from silos in Yarubieh city.
  • Similar acts of looting occurred on the 19th of the month in the city of Hasakeh, where 45 tankers of Syrian oil were taken out by US forces.
  • Prior to the war and US invasion, Syria produced over 380,000 barrels of crude oil per day, but output has drastically fallen to only 15,000 barrels per day.
  • The country’s oil production now covers only five percent of its needs, with the remaining 95 percent imported amidst difficulties due to the US blockade.
  • The US and EU blockade prevents the entry of medicines, food, supplies, and impedes technological and industrial development in Syria.
  • @zephyreks@lemmy.mlM
    8 points · 2 months ago

    “[MBFC’s] subjective assessments leave room for human biases, or even simple inconsistencies, to creep in. Compared to Gentzkow and Shapiro, the five to 20 stories typically judged on these sites represent but a drop of mainstream news outlets’ production.” - Columbia Journalism Review

    “Media Bias/Fact Check is a widely cited source for news stories and even studies about misinformation, despite the fact that its method is in no way scientific.” - PolitiFact journalists

    MBFC is used when analyzing a large swathe of data because they have ratings for basically every news outlet. Thus, even if a quarter or a third of the data is wrong, you can still generate enough signal to separate it from the noise.

    It absolutely matters who is running a site because there’s an inherent accountability for journalism. There’s a reason you don’t see NYT articles from “Anonymous Ostrich.”

    • @nahuse@sh.itjust.works
      -3 points · 2 months ago

      I accept your point about why it matters who runs the site. I would just argue that in this case, it’s not as relevant, because the goal seems to legitimately be information transparency, which is consistently delivered across its work. Its findings are at least generally reproducible. But no, it’s not scientific; I believe I’ve stated that already. However, it’s a good indication of the reliability of a source.

      Yes, human bias creeps in, hence my point of using it alongside general media literacy and critical thinking when evaluating media.

      It aggregates and analyzes a ton of sources, and gives generally accurate information about how they are funded, where they are based, and how well they cite original sources. These are all things that can be corroborated by a somewhat systematic reading of the sources themselves.

      • @zephyreks@lemmy.mlM
        7 points · 2 months ago

        An LLM also “aggregates and analyzes a ton of sources, and gives generally accurate information about how they are funded, where they are based, and how well they cite original sources.”

        That doesn’t make an LLM a useful source.

          • @zephyreks@lemmy.mlM
            6 points · 2 months ago

            We don’t allow LLM-generated summaries as news stories. Do the legwork, use these tools to start if you want to, but don’t cite them as though they are gospel.

            • @nahuse@sh.itjust.works
              -1 point · 2 months ago

              What are you talking about? LLMs have no bearing in this conversation, you brought them up.

              Are you saying that you don’t allow people to use tools to evaluate media and share their reasons for scepticism?

              The bit that I quoted from MBFC is factual information (the story’s sponsors and an assessment of reliability), which I used to begin a conversation about the source.

              Upon further discussion, the story was, indeed, ultimately sourced to a Syrian governmental agency, and has since been repeated by various other governmental sources. There has not yet been any evidence to support the allegations made by the original source, which supports MBFC’s assertion that the original news agency does not often provide reliable (by journalistic standards) justification for its news stories. It seems like a really weird idea for you to so vehemently oppose a resource that enables critical thinking.

              The news article is an extension of at least one state agency, and there are critiques of its truthfulness. That’s the takeaway from my original comment.

              I feel like I’m repeating myself, but I literally cannot fathom a good faith justification for not allowing a widely accepted tool for media literacy to be allowed here. (For clarity, I’m talking about MBFC, not any LLM stuff, which only serves to obfuscate things.)


              • @zephyreks@lemmy.mlM
                6 points · 2 months ago

                I cannot fathom a good faith justification for allowing a resource that intentionally obfuscates the media landscape in an effort to compress the entire landscape onto a 2D plane from a person who cannot be found through any conventional means and very well may not exist. Their methodology is bunk for a number of reasons, but we’ll focus specifically on how they evaluate factuality.

                1. As you know, op-eds typically fall under different journalistic purview than news stories. This is as true for the NYT and SCMP (newspapers of record) as it is for Breitbart. Mixing the factuality rating for op-eds and news stories is rather questionable.

                2. The rating scheme works by sampling (how? nobody knows) a small number of stories from each paper and evaluating their factuality. This destroys the validity of the data, as different news sources cover different stories and categories of stories vary in factuality. For example, a paper which records the daily weather temperature in Toronto would be “very highly accurate” even if they release a story saying that water is dry and trees are fake once a month. Because of the limitations of sampling, their methodology leads to inherently skewed results.

                3. The definition of propaganda used is… unclear. This is obvious, as statements made by the US government and repeated by other news agencies are not considered propaganda, despite their factual inaccuracy. For example, “40 beheaded babies” (later demonstrated to be false) and “we [the United States] have the most sophisticated semiconductors in the world” (literally, provably false, because TSMC’s Taiwan fabs are the clear and undisputed leader).

                4. They fail to do due diligence on sourcing because of (I assume) a lack of experience. For example, in their critique of the article “Fake data - the disease afflicting China’s vaccine system,” they say that the article is poorly sourced because it lacks hyperlinks. The article in question cites: a Hong Kong microbiologist (by name), a professor at the University of Hong Kong (by name), the WHO, stories published in the China Economic Times, data from the State Drug Administration, a law case against Changsheng Biotech, and an unnamed head of a disease control center in China. This, they claim, is a use of “quotes or sources to themselves rather than providing hyperlinks.” Their evaluation of “sourcing” seems to depend almost entirely on the usage of hyperlinks.

                5. They fail to consistently apply standards applied to smaller news outlets (such as Al Jazeera) to larger news outlets (such as the New York Times and CNN). Against Al Jazeera, they claim that wordplay is used that is negative towards Israel. However, as covered by The Intercept and The Guardian, the New York Times and others have just as extreme (if not more extreme) policies surrounding wordplay that is used to show Israel in a positive light. In major newspapers, for example, the words “slaughter,” “massacre,” and “horrific” are reserved almost exclusively for Israeli deaths rather than Palestinian deaths.

                6. MBFC is not consistent with the sources of their fact checks. Against Al Jazeera, they point to “The forgotten massacre that ignited the Kashmir dispute” as not crediting the image correctly. In fact, the caption describes exactly what the image shows, which is exactly what the original source for the image (which they cite) claims.

                7. I can go on…

                Again, if it’s trivial to do the legwork and discredit a source anyway, then do that. If it’s not, then don’t outsource the work just because you don’t understand it.

                • @nahuse@sh.itjust.works
                  -3 points · 2 months ago

                  We can talk about how it assesses factuality, but it’s not really relevant to my particular use of MBFC, since I quoted how the media of the OP is funded, which is incredibly relevant.

                  The existence of op-eds and their content is a useful indicator of where a particular media entity sits. Their editorial standards also reflect the kind of language a source routinely allows. It’s a good indication of what the outlet is willing to publish.

                  What is your critique of how it states it samples? It’s a sample of a media source for a qualitative and subjective assessment. I, too, would like to know more about how it samples, but I can also see the framework that it follows to assess factuality, and can confirm or dispute it through a quick look at the headlines and by skimming through some stories, if it seems warranted (though I admit, when it comes to sensationalized headlines and incendiary language, or an obvious government agenda, I won’t necessarily do all my due diligence to assess a media source… like I did with the OP).

                  As for your specific concerns about factuality, you chose some random articles and engaged with them specifically but didn’t link them here, so I’m not going to do your job and go and find the thing you’re talking about.

                  To your last comment: it’s not always trivial to do the legwork. There is a lot of media out there, and it’s just getting more and more overwhelming. MBFC is just a tool. You have to be aware of the dangers when using a tool. Your critiques are all somewhat valid, but you’re advocating for throwing out a useful tool for media literacy because it’s not perfect.

                  • @zephyreks@lemmy.mlM
                    3 points · 2 months ago

                    It’s entirely relevant. If a source is bad as a whole, the foundation of trust you evidently have for it is built on sand.