Artificial Intelligence has stirred a heated debate across the globe, and a series of questions has gained relevance. Will AI surpass human intelligence in the next few years? Is the need for humans gradually diminishing with the rise of AI? Will AI soon replace writers, journalists, researchers, and even stock market advisors? Will it analyze the stock market and tell you which stocks to invest in for the quickest profit? What will humans do if AI both plans and executes? Is generative AI a boon or a curse for human civilization? Will humans control machines in the near future, or will machines surpass humans? Today, we will try to explore answers to these questions.
An image from Jammu and Kashmir’s Akhnoor sector illustrates how the Indian Army is using technology to foil the enemy’s plans. Our security forces are thwarting enemy actions on the border with the help of robotic mules, known as Multi-Utility Legged Equipment (MULE). Equipped with high-tech cameras, this robotic mule not only conducts reconnaissance of geographically challenging and dangerous terrain but can also deliver ammunition and medical assistance to soldiers in a war situation.
Recently, the Indian Army conducted the Trishul and Maru Jwala exercises in the deserts of Rajasthan’s Jaisalmer. The joint exercise featured T-90 Bhishma tanks, Apache attack helicopters, and a range of small and large weapons equipped with artificial intelligence technology. Indian Army personnel deployed on the front lines understand the importance of high-tech capability for border security. In May 2025, the world witnessed the unparalleled bravery and valor of the Indian Army in Operation Sindoor. Using its AI-based, cloud-integrated air command and control system, the Army not only identified an air threat from Pakistan in time but also destroyed it mid-air in retaliatory action.
AI – A Boon Or Curse For Human Civilization?
With the help of Artificial Intelligence (AI), terrorist hideouts can be destroyed without entering foreign territory. This means the era of carpet bombing to eliminate enemy bases is over. Real-time data analysis and navigation systems have streamlined the process from locking onto a target to destroying it, within seconds and from several hundred kilometers away. The action-reaction scenarios during Operation Sindoor made it clear that advanced technology and AI play a more decisive role than conventional weapons in gaining an edge in war. In upcoming wars, space-based GPS, communication systems, and spy satellites will be used as weapons.
Amid rapidly changing methods of warfare, many nations are developing weapons designed to be uninterceptable. In India, DRDO scientists are developing humanoid robots for dangerous military operations, to be deployed on missions like well-trained troops. The question arises: will AI transform the methods and weapons of conventional warfare? Just as the invention of electricity revolutionized human life, the daily stream of innovations in AI is expected to bring similarly sweeping changes.
China has created MANUS, an AI agent that can function with minimal guidance. Butterfly Effect, the Chinese company behind MANUS, claims that its AI agent can operate and make decisions on its own, depending on the situation. This raises a question: if AI can make decisions like humans, could it go beyond human control? Similarly, Elon Musk’s AI chatbot, Grok, is a hot topic of discussion because it answers questions boldly, sometimes even using abusive language. It responds to queries in the same style and manner in which they are asked. Let us try to understand how such experiments in the era of AI are challenging human psychology.
Grok differs from other chatbots and has even made headlines for using abusive language. When we asked Grok why, it responded: “My creators designed me to speak like a human – honestly and sometimes with a bit of humor. I don’t try to be overly formal or always ‘clean,’ like some other AIs do. My goal is to speak the truth and help, not to please everyone.”
The chatbot also excels at explaining itself. When asked whether it had the answers to all the world’s questions, it responded with the eloquence of a wise and intelligent person:
“I can try to answer a variety of questions, but I won’t claim to have every answer. The world is vast and complex, and some things may still be beyond my understanding. Still, I’ll always do my best to help! What question do you have? Maybe we can solve it together.”
China’s MANUS is modeled after DeepSeek. It is considered a very powerful AI tool that functions like a digital employee, capable of performing tasks on its own. DeepSeek AI and ChatGPT respond to users’ questions in a chat interface, while MANUS can also perform other tasks, such as booking tickets and filtering resumes.
Recently, China’s DeepSeek claimed that its new version improves the AI’s writing and coding capabilities. At the pace DeepSeek is advancing, it is predicted to challenge ChatGPT and Google Gemini in the near future. Several AI chatbots exist, such as ChatGPT, Google Gemini, and Meta AI, and each operates in its own unique way. However, not every answer received from an AI chatbot is necessarily correct. AI chatbots draw their answers from data in the virtual world, and AI tools may not yet be able to determine whether specific data there is accurate. With the rapid expansion of technology, fake news, misinformation, and disinformation have also spread rapidly, causing significant problems for ordinary people and organisations alike.
A similar incident recently came to light in the United States. A solar company in the US state of Minnesota alleged that false news generated by AI had damaged customer trust and caused losses of approximately Rs 200 crore. The company attempted to have the false news removed from Google, but failed, and has now filed a defamation suit against Google for Rs 900 crore. According to reports, half a dozen lawsuits related to AI tools have been filed in the US over the past two years, alleging that AI spread false and damaging news.
It is important to understand the patterns in the responses AI produces. When an AI tool was asked for information about a renowned female figure, it identified her husband as her brother. Likewise, when asked about a man’s personal life, the AI said he was unmarried when, in reality, he was married. For a lesser-known person, an AI tool can offer vague information and get away with it, and the risk of the response being incorrect is just as high. The questions that arise: How will it be determined whether the information generated by an AI tool is accurate? Who will be held responsible for errors committed by Artificial Intelligence? What are the chances that AI’s errors will be corrected over time?
Now, it becomes essential to understand where AI can make human work easier. The world’s best technical minds and tech tycoons, including Sundar Pichai, Elon Musk, Bill Gates, and Yuval Noah Harari, believe the day is not far off when machines will no longer need to take commands from humans. The Israeli historian Harari has been speaking openly on every aspect of AI. He argues that AI is unlike any previous technology in human history: it is the first technology that can make its own decisions. The question here is: if machines begin to make decisions on their own based on the situation and implement them as well, won’t humans lose their relevance? If machines are not controlled, won’t human existence be threatened?
Well, machines can also express emotions like humans. A prime example is Sophia, a female-presenting social humanoid robot that can recognize faces and voices, interact easily with people, and even build friendships. Similarly, a robot named Grace assisted in caring for many patients during the COVID-19 pandemic. Imagine what would happen if robots equipped with generative AI were to take on the role of human companions.
Israeli historian Yuval Noah Harari has been vocal about the future and dangers of AI. He says that AI robots can devote 100% of their emotions to maintaining a relationship, while the human mind spends much of its time thinking about what to say, how to say it, and how to move the relationship forward. In this respect, AI robots can outperform humans.
Therefore, it is feared that if AI-powered machines are misused, they could pose a significant threat to human civilization. When ChatGPT was introduced to the world in November 2022, it seemed nothing short of a miracle. Now, countries are competing to develop similar tools, and a variety of generative AI tools, such as MANUS, DeepSeek, Synthesia, Soundraw, Slides AI, and Google Gemini, are available today.
Here, the relevant questions are: What will humans do when generative AI takes over most of the tasks typically associated with the human mind? Who will be held responsible for mistakes made on the basis of information derived from generative AI, or by machines equipped with AI technology?
In India as well, efforts are being made to examine the pros and cons of AI-induced changes from every angle. The central government wants AI to be used for the betterment of people, not to their harm. Hence, there is a growing demand for AI regulation in India.
The rapid pace at which AI is advancing, and machines’ growing ability to think, understand, and act like humans, have led to predictions that the day is not far off when AI will replace humans. Used properly, AI technology can make human tasks easier. Misused, it could pose a major threat to human civilization.
Geoffrey Hinton is known worldwide as the Godfather of AI, yet he now believes that the development of AI could prove extremely dangerous for humans. While 99 out of 100 people are racing to create better and smarter AI, only one is working on how to keep it in check. Microsoft co-founder Bill Gates believes AI can also help reduce inequality in healthcare and education. Meanwhile, OpenAI CEO Sam Altman has repeatedly warned of the dangers of AI. He argues that an international agency modeled on the International Atomic Energy Agency should be established to guard against the harmful risks of the technology, and he considers generative AI as dangerous as an atomic bomb. Historians like Yuval Noah Harari, however, argue that AI will not end humanity the way nuclear weapons could. Rather, technology will change so much that humanity may face a major challenge to its survival. In such a situation, it is important to understand how, and to what extent, AI is being used in India.
Over 50 million cases are pending adjudication in Indian courts, and the Supreme Court alone has a backlog of over 90,000 cases. Consequently, the apex court is incorporating AI and machine-learning tools to improve the judicial process.
AI and machine-learning tools are not currently being used for judicial decision-making. However, the introduction of AI in the Supreme Court is expected to accelerate case filing, translation, and legal research, and to help identify similar cases. Similarly, efforts are underway to use artificial intelligence to drive change in sectors ranging from agriculture to healthcare.
AI can be used to decide the right time for sowing and harvesting, and to analyze weather patterns, soil moisture, and crop growth. AI-equipped sensors can detect crop diseases in time and guide the spraying of pesticides to prevent significant losses. In healthcare as well, efforts are underway to use AI and a patient’s medical history to predict future diseases. Artificial Intelligence can even predict, from a few test reports, how a person’s heart will function over the next 10 years.
However, it cannot be denied that an individual’s life could be endangered by the misuse of an AI-based tool or of health-related data. Nor can the possibility be ruled out that advanced versions of generative AI could be used as biological or cyber weapons. It is therefore important to develop mechanisms to combat the threats AI poses.
India has the largest youth population in the world. However, the stark reality is that a significant portion of our young workforce is not skilled enough for the global job market. In this context, AI is proving very helpful in teaching young people new skills and reskilling the workforce. In education, work is underway to analyze students’ behavior and comprehension and to provide personalized education tailored to each student’s abilities.
Chatbots are being used as personal tutors. Efforts are underway to find new ways to solve real problems with the help of AI. Artificial Intelligence is being used in everything from catching criminals to controlling traffic. Currently, the number of internet users in India is between 900 million and 1 billion, and it is estimated to increase to 1.2 billion in the next two years.
The world has been divided into two parts, virtual and real, and both have their own advantages and disadvantages. In computer terms, the human brain’s storage capacity is limited, and it is no match for AI in processing speed, data analysis, and multitasking. However, AI cannot question or create the way the human brain can: in creativity, moral ethics, emotional intelligence, common sense, and critical thinking, the human brain still far surpasses generative AI.
Historians like Yuval Noah Harari also predict that AI robots may use 100% of their emotions in maintaining relationships, meaning robots may even surpass humans in attracting partners. AI is also bound to change the nature of jobs. A recent World Economic Forum study predicts that AI will significantly impact the job market by 2030. While some jobs will disappear, will AI also create new ones?
A serious debate has erupted across the globe about the potential for widespread job losses due to artificial intelligence. According to one count, 153,074 people lost their jobs in the US in October 2025, after companies laid off 54,064 in September. A significant number of these employees worked at tech companies.
Tech companies are steadily increasing their use of AI to cut costs, and major changes in the job market are being predicted as a result. Automation and AI have indeed caused significant disruption in the job market: the use of robots to replace factory workers is rising, and areas are being explored where robots equipped with generative AI can replace humans.
Recently, China built the world’s first robot mall, whose operations are handled entirely by humanoid robots. According to one study, the number of robots in China has exceeded 2 million and is projected to grow at 10% annually until 2028. China’s industrial robot market was valued at approximately $6.31 billion in 2024 and is projected to reach $20.33 billion by 2032. Some Chinese companies are developing inexpensive humanoid robots; made-in-China models are now available for as little as Rs 500,000.
Another important truth is that AI will not take away anyone’s job. Rather, those who lag in using it will be replaced by those adept at it, and the challenge of constantly updating and upgrading oneself to keep pace with rapidly changing technology will remain. But the world also needs to consider another aspect of AI seriously. The day is not far off when driverless, AI-equipped vehicles will be running on the roads. If a road accident occurs due to a technical glitch, who will be held responsible? If an article is written on the basis of incorrect information derived from AI, and someone objects and files a lawsuit, who will be held responsible? If someone invests in stocks relying on the analysis of an AI-based tool and loses money, who will be held responsible? Have advanced versions of generative AI progressed to the point where they can judge between right and wrong like humans? Perhaps not yet. Today’s era, however, is not about opposing AI but about moving forward in tandem with it. Technology is beneficial as long as humans control it. When technology starts controlling humans, the dangers are difficult to imagine.