
Does Artificial Intelligence need an ethical code? -Part 1

The following is a brief overview of the ADgully article highlighting codvo.ai. You can also read the entire article by clicking on the link.

Jason M Allen was the winner of the digital category at the Colorado State Fair. He created the artwork using Midjourney, an AI program. The art world is still divided on the ethics of artificial intelligence-generated art. In Ukraine, Russia appears to have used Kalashnikov ZALA Aero KUB-BLA loitering munitions (drones) powered by AI. Though official confirmation was difficult to obtain, particularly from a regime like Russia, unmanned AI-powered robotic weapons have become a repugnant reality. Deepfake videos have also become prevalent, gaining notoriety for their lethal effects on society. Deepfakes are fabricated videos of real people saying things they never actually said, created by combining AI and machine-learning techniques.

Artificial intelligence (AI) has the potential to significantly improve our lives. However, nefarious elements use it for more sinister purposes, such as disinformation and deepfakes. How can the world make sure that AI is used ethically and fairly?

According to Amit Verma, Managing Partner at Codvo.ai, large and small businesses are collaborating to figure out how to prevent unethical AI use. In his view, ethical AI comes down to three practices: it must protect individual rights and privacy, it must be non-discriminatory, and it must be non-manipulative.

Devang Mundhra, Chief Technology & Product Officer at KredX, believes that raising awareness is vital: people must understand the technology and how it can be abused. Firms must be open, both internally and externally, about how they use AI.

Siddharth Bhansali, the founder of Noesis.Tech and the CTO of XP&D Land and Metaform, is an advocate for stringent regulations in this area. He promotes open and effective public-private partnerships to ensure the ethical use of artificial intelligence in products.

AI has the potential to be used for both great good and great harm. AI, like any other technology, is 'value-neutral,' according to Munavar Attari. Governments must encourage technology vendors to self-regulate. Companies must begin incorporating AI exposure and use into their "code of ethics." The public debate on societal ethics, and how it manifests itself in AI-powered products, will be critical.

Amit Verma believes that a widely accepted AI governance model is required. Governments must incorporate anti-discrimination legislation into AI and create an ethics framework. To address ethical issues such as bias, privacy, and discrimination, policymakers must strike the right balance within their policies. Governments must make sure that the way AI is developed and distributed reflects the world at large, he says. He also believes that diversity in AI company leadership teams should be mandated.

The private sector can create organizations that recognize the potential uses and misuses of artificial intelligence. According to Amit Verma, enterprises should incorporate value-based principles into their internal AI activities. According to Devang Mundhra, there is no silver bullet, but citizen trust and ecosystem credibility are critical. Some ethical AI technology goals could include detecting fake news, creating fashion designs, diagnosing rare diseases, developing new vaccines, and so on. Identifying system biases and training models on more diverse data can help mitigate the negative impact of AI solutions. The public sector must support the private sector.
