A new debate is taking place globally. Will Artificial Intelligence (AI) surpass human intelligence? Is the need for humans decreasing day by day in the era of AI? Will AI be responsible for job losses among writers, journalists, researchers, and stock market advisors? Can AI analyze the stock market and predict the best investments for maximizing profits in the shortest time? And if AI takes over that role, planning and execution included, what will happen to humans? According to a news report this week, even the Supreme Court has started using AI and machine learning-based tools to expedite case resolutions. This means that beyond agriculture, healthcare, and policing, AI’s influence is now expanding into the judiciary as well.
After ChatGPT, China’s DeepSeek created a stir. Elon Musk’s GROK is now a hot topic being discussed worldwide. The way GROK behaved a few days ago drew attention to the manner in which it answers questions. The world is also looking at Manus, the recently launched AI agent from China, with astonishment. It is claimed that Manus can think, plan, create websites, analyze stock market trends, and provide investment advice, essentially functioning as a digital employee.
But the big question remains: Who will take responsibility for the mistakes made by generative AI tools? Is generative AI a boon or a curse for human civilization? In the coming days, will humans control machines, or will machines overtake humans?
Why Might AI Slip Beyond Human Control?
First, let’s talk about China’s Manus. Manus is being seen as an AI agent capable of working with minimal commands. Butterfly Effect, the company behind its creation, claims that Manus can think independently and make decisions based on the situation. This raises an important question: if AI can make decisions like humans, will it also slip beyond human control?
Similarly, Elon Musk’s AI, GROK, has become a hot topic of discussion. It answers most questions fearlessly, sometimes even using abusive language. Interestingly, GROK’s responses often mirror the style in which questions are asked, making its behavior a subject of debate.
How AI Is Challenging The Human Mind
First of all, let us understand how experiments in the world of AI are challenging the human mind. It is also important to analyze the pattern of responses generated by AI tools. For instance, when an AI tool was asked for information about a famous female personality, its response was somewhat satirical, incorrectly describing her husband as her brother. Similarly, when asked about a person’s personal life, the AI claimed he was unmarried, even though he was actually married. AI tools tend to provide vague or inaccurate information about less famous individuals, often avoiding a direct answer. This raises concerns about the reliability of AI-generated information, as a given response can be just as likely wrong as right.
Now, the key question is: How can we determine whether the information provided by AI is accurate or misleading? Who will be held accountable for AI’s mistakes? Moreover, what are the chances that AI errors will improve over time? It is crucial to understand where AI can simplify human tasks and where it might mislead users with incorrect responses.
Will Human Existence Be At Risk?
The world’s leading technical minds and technology tycoons like Sundar Pichai, Elon Musk, Bill Gates, and Yuval Noah Harari have often stated that the day is not far when machines will no longer need commands from humans. Israeli historian Yuval Noah Harari has been outspoken about every aspect of AI. He argues that AI is unlike any previous technology in human history—it is the first technology capable of making its own decisions.
Now, the question arises: if machines start making decisions based on situations and implementing them independently, what will be the relevance of humans? And if machines are not kept under control, could human existence itself be at risk?
Will The Development Of AI Prove Dangerous For People?
Geoffrey Hinton is widely regarded as the “Godfather of AI.” However, he now believes that AI development could pose significant dangers to humanity. As he has put it, 99 out of 100 people are currently striving to create better and smarter AI, while only one is focused on figuring out how to keep it in check.
Microsoft co-founder Bill Gates sees AI as a tool that can help reduce inequality in healthcare and education. At the same time, OpenAI CEO Sam Altman has repeatedly highlighted the risks associated with AI. He argues that an international agency, similar to the International Atomic Energy Agency, should be established to mitigate AI’s negative impacts. Altman believes generative AI is as dangerous as an atomic bomb.
Meanwhile, historians like Yuval Noah Harari suggest that the downfall of humanity will not come from nuclear weapons but from the rapid advancements in technology. He warns that technology could evolve to a point where humans struggle to maintain their existence—potentially transforming or even destroying everything.
Given these concerns, it becomes crucial to examine how AI is being used in India and to what extent it is being integrated into various sectors.
Virtual Versus Real
The world has been divided into two parts: virtual versus real. Both have their own advantages and disadvantages. The biggest difference between the human brain and the computer is speed: the human brain processes information far more slowly than Artificial Intelligence. Put in computing terms, the brain’s storage capacity is limited. Yet the human brain’s ability to ask questions and to create is something AI does not possess.
Even though the human brain cannot match AI in processing speed, data analysis, or multitasking, in terms of creativity, moral ethics, emotional intelligence, common sense, and critical thinking, the human brain is still far ahead of generative AI. However, historians like Yuval Noah Harari also predict that AI robots may one day deploy emotion fully in maintaining relationships. In other words, in the future, robots may surpass humans at attracting others with the very emotions that humans possess.
Due to AI, the nature of jobs is also expected to change in the future. A recent World Economic Forum study found that by 2030, AI will significantly impact the job market: some jobs may disappear, while at the same time new jobs will be created because of AI.
Humans Should Also Remain Updated And Upgraded With AI
It is often said that AI won’t take anybody’s job away; rather, humans who cannot use AI will be replaced by those who are experts in using Artificial Intelligence. With the rapid pace of technological advancement, staying updated and upgraded to keep up with ever-changing technology will have to remain a constant priority.
However, the whole world needs to seriously consider the other side of AI, because the day is not far when driverless vehicles equipped with AI technology will be seen running on the roads. In such a situation, if a road accident occurs due to a technical glitch, who will be held responsible for it?
Who Will Be Responsible For The Mistakes Of AI?
If an article is written on the basis of wrong information produced by AI, and someone objects to it and files a case, who will go to jail? If a person invests in shares trusting the analysis of a leading AI-based tool and loses money, who will be held guilty? Has the advanced version of generative AI progressed so far that it can distinguish right from wrong like humans? Perhaps not yet. But today’s era is not one of opposing AI; it is one of moving ahead in step with AI. Technology is helpful as long as humans control it. When technology starts controlling humans, the danger becomes difficult even to imagine.