Methodology
This multimodal research project examines how social media influencers and political activists manipulate public opinion in the United States. To investigate this question, I used a multi-method qualitative approach that combined academic literature review, case study analysis, cross-platform content observation, and multimodal media integration. My goal was to understand not only what misinformation looks like but also how it circulates, why it spreads, and how audiences interpret it.
Sources and Academic Framework
The foundation of this project is built on peer-reviewed scholarly articles, government reports, and analytical studies on misinformation, political communication, and social media influence.
Several key studies guided the academic frame:
- Ku et al. (2026) provided insight into why users struggle to critically evaluate misinformation and how “boost and nudge” interventions improve analytical thinking.
- Khullar (2025) offered evidence of how misinformation spreads rapidly during public health crises and explained why certain influencers dominate online narratives.
- Kaur and Gupta (2023) supplied a comprehensive overview of misinformation spread, detection, and mitigation across social platforms.
- Cho and Zhu (2023), writing for the Congressional Research Service, clarified how social media platforms disseminate content and how algorithmic systems influence visibility.
- Sánchez-Querubín et al. (2023) contributed detailed observations on political TikTok, including the role of performance, humor, and remix culture in shaping political messaging.
These sources formed the theoretical basis for understanding how algorithmic amplification, emotional manipulation, and fringe ecosystems influence public opinion.
Data Collection
This project involved several types of data collection:
1. Academic Literature Collection
Peer-reviewed articles from 2023–2026 were selected based on relevance to misinformation, platform algorithms, influencer strategies, and political communication. Each article was analyzed for:
- methodology
- key findings
- theoretical frameworks
- relevance to digital manipulation
2. Social Media Case Observation
I reviewed publicly available content from platforms such as TikTok, YouTube, Instagram, and X. This included:
- influencer political posts
- memes
- short-form videos
- commentary clips
- examples of misinformation patterns
This observational analysis helped identify real-world examples that matched the behaviors described in the scholarly research.
3. Cross-Platform Analysis
Guided by findings from Kaur and Gupta (2023), I examined how misinformation narratives tend to move across platforms. This involved:
- tracking how specific messages appeared on multiple platforms
- comparing format changes (memes, videos, screenshots)
- noting how emotional tone shifted during migration
Analytical Approach
The project uses qualitative content analysis supported by multimodal media integration.
1. Thematic Coding
Themes were identified based on recurring ideas in the literature, including:
- emotional manipulation
- algorithmic amplification
- authenticity and influencer appeal
- fringe-to-mainstream pipelines
- vulnerabilities in public understanding
These themes were then applied to real-world media examples.
2. Comparative Media Analysis
I compared academic findings with actual influencer content to determine how theory and practice align. For example:
- Ku et al.’s (2026) findings on the difficulty of critical evaluation were reflected in emotionally charged TikTok videos that discouraged analytical reasoning.
- Khullar’s (2025) description of concentrated misinformation production matched political influencer networks that repeatedly shared coordinated narratives.
3. Multimodal Integration
Because misinformation is visual, interactive, and often emotionally charged, the project includes embedded media such as:
- screenshots of posts
- viral videos
- infographics
- charts
- memes
This mirrors the real structure of online influence and makes the research more accessible.
Limitations
This project focused primarily on widely accessible public content, so the analysis does not cover:
- private group chats
- paid advertising dashboards
- closed political communities on platforms like Telegram or Discord
Additionally, because platform algorithms are proprietary, this project analyzes observable outcomes rather than internal algorithmic code.
Despite these limitations, integrating peer-reviewed research with real media examples provides a well-grounded understanding of how digital manipulation functions.
Conclusion
The methodology used in this multimodal project combines academic research, qualitative media analysis, and real-world digital observation. By grounding visual examples in scholarly frameworks and integrating cross-platform evidence, this project offers a clear and informed explanation of how social media influencers and activists manipulate public opinion in the United States.
This multimodal approach not only reflects how misinformation functions in real life but also makes the research accessible, engaging, and visually informative.