Researchers at the Indian Institute of Science (IISc) have warned that the machine-learning and artificial intelligence algorithms used in sophisticated applications such as autonomous vehicles are not foolproof and can be easily tampered with by introducing errors.

Machine-learning and AI software are trained on an initial set of data, such as images of cats, and learn to identify feline images because that is the data they are fed. A common example is Google returning better results as more people search for the same information.
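To make that training process concrete, here is a minimal, illustrative sketch of fitting a small image classifier on labelled examples. It is not code from the IISc work; the toy model, the random stand-in data and the hyperparameters are all assumptions chosen for brevity.

```python
# Minimal sketch of supervised image-classifier training (illustrative only;
# the toy model and random stand-in data are assumptions, not the IISc setup).
import torch
import torch.nn as nn

# Stand-in dataset: 256 random 32x32 RGB "images" labelled 0 (cat) or 1 (not cat).
images = torch.rand(256, 3, 32, 32)
labels = torch.randint(0, 2, (256,))

# A tiny convolutional classifier.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# The model "learns" only from the examples it is shown: each pass nudges its
# weights so that its predictions better match the provided labels.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The point of the sketch is simply that the system's behaviour is entirely shaped by the data it is shown, which is also why flawed or manipulated inputs can mislead it.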

AI applications are becoming mainstream in healthcare, payment processing, drones deployed to monitor crowds, and facial recognition in offices and airports.

“If the data input is not clear and obvious, the AI machine can throw up surprising results, and that can be dangerous. In autonomous driving, the AI engine has to be properly trained on all road signs,” R Venkatesh Babu, associate professor at IISc's Department of Computational and Data Sciences, told ET. “If the input sign is different, it can change the course of the vehicle, which can lead to disaster.” He added that such systems also need adequate cybersecurity measures to stop hackers from infiltrating them and changing the inputs.

Babu and his students Konda Reddy Mopuri and Aditya Ganeshan, in a paper published in the prestigious IEEE Transactions on Pattern Analysis and Machine Intelligence, have shown how errors can be introduced into machine-learning algorithms so that, for instance, an image of an African chameleon is mistaken for a missile, or a banana for a custard apple.
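For readers who want a sense of how such misclassifications are engineered, the sketch below uses the well-known fast gradient sign method (FGSM) as a generic illustration. It is not the IISc authors' algorithm, and the toy classifier and random input image are stand-ins; the idea shown is simply that a tiny, carefully chosen change to the pixels can flip a model's prediction.

```python
# Generic sketch of an adversarial perturbation via the fast gradient sign
# method (FGSM). Illustrative only; not the IISc authors' method. The toy
# model and random input image are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in input image
true_label = torch.tensor([3])                        # its correct class index

# Compute the loss of the correct prediction and its gradient w.r.t. the pixels.
loss = nn.CrossEntropyLoss()(model(image), true_label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.03  # a perturbation small enough to be barely visible to a human
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The perturbed image looks almost identical to the original to a human eye, yet the classifier's output can change entirely, which is the kind of fragility the IISc researchers warn about.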

The team has shared its algorithm on an open-source platform for others to work on and improve the software.

More work needed on AI

Analysts say the research highlights the hype around AI and the need for more work to improve its efficiency and security. “If these technologies can be confused so easily, we are in trouble. It is like the first generations of computers, which could be hacked easily because they had practically no security,” said Vivek Wadhwa, a distinguished fellow at the Carnegie Mellon University College of Engineering. “The big concern here is that early computers were used by a very small and select group, while AI systems are being deployed on a global scale and reach consumers directly.”

India is taking baby steps to build capabilities in AI, a field dominated by the US and China. In China, the government has invested in AI and built models using citizen data. Chinese internet firms such as Tencent and Baidu have strong AI practices built on their users' data. “Business leaders and policymakers have been told that companies like Google are making AI that works everywhere. But a lot of work still needs to be done, and we should not trust a system just because its developers say it has AI in it,” said Wadhwa.

V Vinay, a former computer science professor at IISc and co-founder of Ati Motors, an autonomous cargo vehicle startup, said the opacity of deep-learning algorithms is a real challenge: it is not easy to explain why a piece of software failed or succeeded.

“We hold machines to a higher standard,” Vinay said. “Deep-learning algorithms lack interpretability: when they work, we do not know why they work; when they do not work, we do not know why they fail. Our inability to analyse failure is an issue.”
