Hyderabad: CM Revanth Reddy Takes Action Against AI-Generated Fake Content in Land Dispute

Hyderabad: In response to the ongoing Kancha Gachibowli land dispute, Chief Minister Revanth Reddy has directed the strengthening of the state’s cybercrime department to prevent the spread of fake content generated by Artificial Intelligence (AI). The fabricated material has sparked significant controversy and fueled widespread misinformation across social media platforms.
AI-Generated Fake Content Creates Land Dispute Controversy
The Kancha Gachibowli land dispute has taken a new turn with the proliferation of fabricated videos and photos created through AI, which have misled the public and created confusion. Chief Minister Reddy emphasized the need for advanced forensic tools, both hardware and software, to detect and counter the impact of such misleading AI-generated content. He has also instructed officials to pursue a legal inquiry into the creation and distribution of this fake content to ensure accountability and prevent future incidents.
Court Inquiry Requested for Fake Content on Social Media
The Chief Minister reviewed the ongoing cases related to the Kancha Gachibowli land dispute at a meeting with state officials. The controversy gained national attention after fake photos, videos, and audio clips began circulating on social media, including footage that allegedly showed bulldozers injuring wildlife. The content went viral before the facts could be established, and several prominent personalities, including Union Minister G. Kishan Reddy, former minister Jagadish Reddy, and celebrities such as John Abraham, Dia Mirza, and Raveena Tandon, unknowingly shared it, further fueling the confusion.
Journalist Sumit Jha, who initially posted one of the fake videos, later deleted it and issued an apology. However, many others continued to spread the misinformation, adding to the chaos.
AI’s Threat to Democratic Institutions and National Security
Police officials have expressed concern over the growing threat posed by AI-generated content, especially on sensitive issues like the Kancha Gachibowli land dispute. The use of AI to create fake content is seen as a potential threat to democratic institutions, as it can escalate tensions and even trigger conflicts over national security issues such as border disputes.
The police have warned that AI-driven fake content could act as a catalyst for such disputes, much as propaganda has stoked tensions along the India-Pakistan and India-China borders. The rapid spread of false information through these artificial means could have long-term consequences for public trust and national security.
Unprecedented Controversy Around Kancha Gachibowli Land Development
The controversy surrounding the 400-acre government land in Kancha Gachibowli is the first such dispute to arise in the 25 years over which the land has been developed. Previous projects on the land, including the construction of ISB, Gachibowli Stadium, IIIT, and several private buildings, proceeded without significant opposition or environmental concerns. However, the current debate over wildlife conservation and potential environmental damage has ignited a public controversy that has been amplified by fake AI-generated content.
As Hyderabad grapples with the implications of AI-generated fake content, Chief Minister Revanth Reddy’s directive to strengthen the cybercrime department highlights the growing need for vigilance in combating misinformation. With AI posing a serious challenge to societal integrity and national security, authorities are taking proactive steps to safeguard against future digital manipulation. The government’s focus on strengthening the legal and technological framework aims to prevent further exploitation of AI for malicious purposes.