On April 16, 2026, Ko Woo-young, a senior researcher at South Korea’s National Security Technology Institute (NSTI), presented findings at the 32nd Information and Communication Network Security Conference (NetSec-KR 2026) in Seoul showing that AI-generated fake news can be produced remarkably cheaply and quickly. According to Ko’s presentation, creating a single fake news item with generative AI costs about 13 won and takes roughly 4 seconds on average; in his experiment, a batch of 12 fake news items cost 155 won and took 46 seconds in total. Ko emphasized that the proliferation of AI-generated disinformation and malicious comments designed to manipulate public opinion has become a critical threat to society.
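The per-item figures follow directly from the batch totals; a quick arithmetic check (a verification sketch, not part of the presentation):

```python
# Deriving the per-item averages from the 12-item batch totals cited by Ko.
batch_cost_krw = 155   # total cost of generating 12 fake news items, in won
batch_time_sec = 46    # total generation time for the 12 items, in seconds
items = 12

print(f"cost per item: {batch_cost_krw / items:.1f} won")  # 12.9 -> reported as ~13 won
print(f"time per item: {batch_time_sec / items:.1f} sec")  # 3.8  -> reported as ~4 seconds
```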
Ko highlighted that as generative AI capabilities are increasingly misused, the most common abuse tactic is opinion and information manipulation. With fake news and fake comments now so cheap to produce and so plentiful, he stressed, society has reached a point where distinguishing truth from falsehood is extremely difficult.
According to Ko’s analysis, when fake information becomes ubiquitous, members of society become fatigued by disinformation and begin to lose interest in reality. This phenomenon leads people to question even authentic information. Ko noted that “punishment for fake news is legally difficult to enforce unless it results in financial gain,” and emphasized that “generative AI technology is evolving too rapidly, and institutional improvements are necessary.”
Ko Woo-young presenting on the risks of AI-generated fake news at NetSec-KR 2026
Choi Seok-woo, Director of the NSTI, presented on “AI-Based Malware Analysis Technology” at the same session. According to Choi, with AI now being used to generate malware, approximately 450,000 new malware samples appear every day, and the cumulative total of malware has surpassed 1 billion samples.
In response to this escalating threat, Choi advocated developing AI-driven countermeasures, including AI-based analysis support systems, autonomous analysis agents built on large language models (LLMs), and automated deobfuscation tools.
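Choi did not detail NSTI’s tooling; purely as an illustration of what “automated deobfuscation” means in practice, the sketch below reverses a simple two-layer scheme (base64 over single-byte XOR) of the kind often seen in malware droppers. The function names and sample payload are hypothetical, not from the presentation.

```python
import base64

def xor_decode(data: bytes, key: int) -> bytes:
    """Undo a single-byte XOR obfuscation layer."""
    return bytes(b ^ key for b in data)

def deobfuscate(blob: str, key: int) -> str:
    """Reverse a common two-layer scheme: base64 encoding over XOR'd bytes."""
    return xor_decode(base64.b64decode(blob), key).decode("utf-8", errors="replace")

# Hypothetical obfuscated payload, built here only so the example is self-checking.
secret = "powershell -enc ..."
key = 0x5A
blob = base64.b64encode(xor_decode(secret.encode(), key)).decode()

print(deobfuscate(blob, key))  # -> powershell -enc ...
```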
Choi Seok-woo presenting on the necessity of AI-based malware analysis technology
Ji Hyun-seok, Senior Researcher at the NSTI, presented on “The Era of LLM-Based Software Security Vulnerability Detection.” Ji’s research examined how LLMs detect security vulnerabilities and found significant limitations in current capabilities.
Ji stated: “In recent cases, AI has been credited with identifying numerous vulnerabilities, but analysis of the results shows this is not actually the case. Effective vulnerability detection was only possible when the LLMs were provided with dedicated vulnerability detection tools.” According to Ji, LLMs currently face several constraints, including difficulty handling large codebases, data dependency issues, and unreliable reasoning.
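Ji did not describe a specific toolchain; the sketch below illustrates the tool-augmented pattern he described, using Python’s standard ast module as a stand-in “dedicated vulnerability detection tool” whose structured findings would then be handed to an LLM for triage (the LLM call itself is omitted). The list of flagged calls and the summarize_for_llm helper are assumptions for illustration only.

```python
import ast

# Calls a minimal static checker might flag; this list is illustrative only.
DANGEROUS_CALLS = {"eval", "exec", "os.system", "pickle.loads", "subprocess.call"}

def call_name(node: ast.Call) -> str:
    """Best-effort dotted name of the function being called."""
    f = node.func
    if isinstance(f, ast.Name):
        return f.id
    if isinstance(f, ast.Attribute) and isinstance(f.value, ast.Name):
        return f"{f.value.id}.{f.attr}"
    return "<dynamic>"

def scan(source: str) -> list[dict]:
    """Stand-in detection tool: flag risky calls with their line numbers."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and call_name(node) in DANGEROUS_CALLS:
            findings.append({"line": node.lineno, "call": call_name(node)})
    return findings

def summarize_for_llm(findings: list[dict]) -> str:
    """Package tool output as context for an LLM triage prompt."""
    lines = [f"line {f['line']}: suspicious call to {f['call']}" for f in findings]
    return "Tool findings:\n" + "\n".join(lines)

sample = "import os\nuser = input()\nos.system('ping ' + user)\n"
print(summarize_for_llm(scan(sample)))  # -> line 3: suspicious call to os.system
```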
Ji projected that synergy would be greatest when skilled security analysts, capable of identifying vulnerabilities themselves, leverage LLM tools. However, he emphasized that “LLM vulnerability detection is not yet complete. Better methods for finding vulnerabilities must be explored.” The presentation underscored that LLM-based vulnerability detection cannot yet be performed effectively by AI alone, and that human expertise remains essential for reliable security analysis.
Ji Hyun-seok presenting on LLM-based vulnerability detection research