OSINT to avoid being deceived by society (Part 1)

yentotexbof
3 min read · Oct 25, 2020

Introduction

How should we deal with misinformation?
As a first step, you can rely on tools such as fact-checkers run by trusted institutions. But be careful about which tool you choose: how much guarantee is there that a fact-checking service works with a correct algorithm? In the end, it comes down to how committed you are. The real question is whether you genuinely want to reach the most probable conclusion.

First step

Fundamentally, every community rests on the idea that its members trust the people inside their own group, and these circles of trust combine to construct what we accept as truth.
Take a university professor as an example. I am not saying that a professor is a bot, but if a behavioral psychologist asserts a theory such as group selection, is believing them itself a kind of group choice? First, the pressure to trust the professor comes from within the university itself. The theory of group selection is controversial, and whether to believe it is again up to each community: the concept is tested against experiments at other universities, and its scientific worth is debated in that process. If the theory turns out to be correct, you may later realize that these very structures of trust were the mechanisms through which group selection operated. Whether or not group selection is really at work, humans trust a close community, or a community of shared interest, to some extent and pass information on to others. The content itself, the scale of the community involved, and the flow of dissemination can therefore all serve as criteria for judging whether something is fake news.
At the very least, each of us can ask whether a piece of information might be misinformation before sharing it, as in the sketch below.
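
As a toy illustration only, the three criteria above (the content, the scale of the community involved, the flow of dissemination) can be turned into a pre-sharing checklist. The questions and the all-or-nothing policy below are my own assumptions, not an established method:

```python
# A toy sketch, not an established method: turn the criteria above
# (content, scale of the community involved, flow of dissemination)
# into a simple pre-sharing checklist. Questions and the strict
# "all must pass" policy are illustrative assumptions.
QUESTIONS = [
    "Does the content cite a verifiable primary source?",
    "Is it spreading beyond one small, closed community?",
    "Can you trace who originally published it?",
]

def pre_share_check(answers):
    """answers: one boolean per question, True = yes."""
    # Share only when every criterion is satisfied (strictest policy).
    return sum(answers) == len(QUESTIONS)

if __name__ == "__main__":
    for q in QUESTIONS:
        print("-", q)
    print("OK to share:", pre_share_check([True, True, False]))  # False
```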

Types of disinformation

Manipulated content: genuine information or imagery that has been distorted. For example, “clickbait” rewrites a headline to be more sensational, because sensational headlines tend to be popular.
(Example: https://medium.com/annie-lab/misleading-satellite-images-of-warped-three-gorges-dam-in-china-are-not-accurate-c3f3080c9772)

Fabricated content: completely false content, created from scratch.
(Example: https://123moviesgoto.com/mimic)

Misleading content: misleading use of information, such as presenting commentary as fact (example: social hacking). More generally, it exploits cognitive illusions and prejudice.

False context or connection: factually accurate content shared with false contextual information, for example when an article's headline does not reflect its content. Recontextualized media also falls into this category. (Example: https://ftp.firstdraftnews.org/articulate/temp/ovcR/story_html5.html, the “chinatown” exercise.) This is a partial inconsistency between the information and its context; one concrete check is sketched below.
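
One practical check for recontextualized media is to compare an image's metadata against the claims made about it. The sketch below uses Pillow, and the JPEG file name is a placeholder of my own invention; note that social platforms usually strip EXIF data, so an empty result proves nothing by itself:

```python
# Minimal sketch: read an image's EXIF metadata and compare the capture
# date with the date claimed in the post. Requires Pillow
# (pip install Pillow); "suspect.jpg" is a placeholder JPEG file name.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_tags(path):
    """Return EXIF tags as {name: value}; empty dict if metadata is stripped."""
    with Image.open(path) as img:
        # _getexif() is Pillow's conventional flattened EXIF helper
        # (JPEG only), so guard against formats that lack it.
        raw = getattr(img, "_getexif", lambda: None)() or {}
    return {TAGS.get(tag, tag): value for tag, value in raw.items()}

tags = exif_tags("suspect.jpg")
print("Captured:", tags.get("DateTimeOriginal", "unknown (stripped?)"))
print("Camera:  ", tags.get("Model", "unknown"))
```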

Imposter content: spoofing a genuine source, for example by using the brand of an established news agency. (Example: accounts imitating a Twitter blue checkmark.) A simple domain-spoofing check is sketched below.
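
As a minimal sketch of catching one kind of spoofing, the snippet below flags domains that sit a small edit distance away from well-known outlets (for example, a fake domain one letter off from a real one). The outlet list and sample domains are illustrative assumptions, not a production blocklist:

```python
# Minimal sketch, not a production checker: flag domains suspiciously
# close to well-known outlets. Outlet list and samples are illustrative.
KNOWN_OUTLETS = ["bbc.co.uk", "reuters.com", "nytimes.com"]

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def possible_imposter(domain):
    """Different from, but suspiciously close to, a known outlet."""
    return any(0 < edit_distance(domain, real) <= 2 for real in KNOWN_OUTLETS)

print(possible_imposter("reuterss.com"))  # True: one edit from reuters.com
print(possible_imposter("example.org"))   # False
```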

Satire and parody: presents a humorous but false story as if it were true. This is not usually classified as fake news, but it can still fool readers unintentionally. (Example: https://medium.com/lifes-funny/the-flying-spaghetti-monster-religion-aka-pastafarianism-b21ab5070c22)

So what is being done to counter these?

Some excellent current measures and methods:
1: https://medium.com/@JordanWildon/every-piece-of-journalism-advice-ive-received-for-free-aa26afd47b35 (A great resource to look at first if you’re aiming for journalism)
2: https://www.justsecurity.org/65795/how-data-privacy-laws-can-fight-fake-news/ (about GDPR)

However, the technology for imitating humans is expected to improve year after year.
Take a look at the Bad Bot Report 2020.
It indicates that bad-bot traffic has reached a record high.
You can find many other examples of such technological progress online.
It is essential to always have the latest information about disinformation.
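
To make the scale of the problem concrete, here is a toy heuristic for spotting bot-like accounts. The account fields and thresholds are illustrative assumptions on my part; real bot detection, such as the methodology behind the Bad Bot Report, is far more sophisticated:

```python
# Toy sketch of heuristic bot scoring. Account fields and thresholds
# are illustrative assumptions, not a real detection model.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int         # time since account creation
    posts_per_day: float  # average posting rate
    followers: int
    following: int

def bot_likeness(acct):
    """Return a crude 0-3 score; higher = more bot-like."""
    score = 0
    if acct.age_days < 30:
        score += 1  # very new account
    if acct.posts_per_day > 50:
        score += 1  # superhuman posting rate
    if acct.following > 10 * max(acct.followers, 1):
        score += 1  # follows far more accounts than follow back
    return score

suspicious = Account(age_days=7, posts_per_day=120, followers=3, following=800)
print("bot-likeness:", bot_likeness(suspicious))  # 3
```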
In Part 2, I would like to examine the efforts of individual institutions in more detail.


yentotexbof

information science & technology. threat hunting. Researcher of Internet Assigned Numbers Authority