Top 10 famous AI disasters
AI disasters are now noticeable cases in which the application of AI has led to negative consequences or amplified pre-existing issues.
Artificial intelligence has a multifaceted impact on society, extending from transforming industries to raising ethical and environmental concerns. AI holds the promise of revolutionising many sectors by increasing efficiency, enabling innovation, and opening new possibilities in many areas of our lives.
One significant impact of AI is on disaster management and disaster risk reduction (DRR), where it supports early recognition and warning systems that help project the potential future trajectories of disasters.
In some critical domains, AI has also raised ethical, social, and political issues that underline the need to design new AI systems that are equitable and inclusive.
AI has also influenced employment and the nature of work across industries. With advances in AI comes the transformative potential to automate and augment business processes, but AI technology still cannot replace human expertise in many fields.
There is also ever-increasing awareness of the environmental footprint of AI and of the need to address the potential climate implications of widespread AI adoption.
To maintain social and ethical values, AI development faces challenges such as ensuring data privacy and security, avoiding bias in algorithms, and maintaining accessibility and equity. AI should have a transparent decision-making process and should ensure that it serves the needs of all communities and groups.
Here are the top 10 famous AI disasters:
01. Generative AI in legal research
An attorney named Steven A. Schwartz used OpenAI's ChatGPT for legal research, which led to the submission of at least six non-existent cases in a lawsuit brief against the Colombian airline Avianca.
The brief included fabricated names, docket numbers, internal citations, and quotes. The reliance on ChatGPT resulted in a fine of $5,000 for both Schwartz and his partner Peter LoDuca, and the dismissal of the lawsuit by US District Judge P. Kevin Castel.
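The underlying failure was treating a language model's output as a verified source. A minimal safeguard is to check every citation against an authoritative database before filing; the sketch below assumes a hypothetical `known_cases` lookup standing in for a real citator, and uses one real citation alongside one of the citations fabricated in this case.

```python
# Hypothetical safeguard: verify model-generated citations before filing.
# `known_cases` stands in for a real citator lookup (e.g., a legal database API).
known_cases = {
    "Zicherman v. Korean Air Lines Co., 516 U.S. 217 (1996)",  # real case
}

draft_citations = [
    "Zicherman v. Korean Air Lines Co., 516 U.S. 217 (1996)",    # real
    "Varghese v. China Southern Airlines, 925 F.3d 1339 (2019)", # fabricated by the model
]

for citation in draft_citations:
    status = "verified" if citation in known_cases else "NOT FOUND - do not file"
    print(f"{citation}: {status}")
```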
02. Machine learning in healthcare
Many AI tools were developed to aid hospitals in diagnosing patients or tracking their condition. Owing to training errors, however, they were found to be ineffective in triaging COVID-19 patients.
The UK's Turing Institute reported that these predictive tools made little to no difference. Such failures can arise from the use of mislabelled data or data from unknown sources.
One example is a deep learning model for diagnosing COVID-19 that was trained on a dataset containing scans of patients in different positions; it was unable to diagnose the virus accurately because of these inconsistencies.
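A basic dataset audit can surface this kind of confound before training. The sketch below assumes a hypothetical metadata table with a diagnosis label and a patient-position column; if acquisition position correlates strongly with the label, a model can score well by detecting the position rather than the disease.

```python
import pandas as pd

# Hypothetical metadata for a chest-scan dataset: one row per scan.
scans = pd.DataFrame({
    "label": ["covid", "covid", "covid", "healthy", "healthy", "healthy"],
    "patient_position": ["supine", "supine", "supine", "standing", "standing", "supine"],
})

# Cross-tabulate label against acquisition position. If one class was
# scanned almost exclusively in one position, position is a confound:
# a model can score well by recognising the position, not the disease.
confound_table = pd.crosstab(scans["label"], scans["patient_position"], normalize="index")
print(confound_table)
# A row like  covid: supine=1.00  against  healthy: supine=0.33  is a red flag.
```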
03. AI in real estate at Zillow
Zillow utilised a machine learning algorithm to predict home prices for its Zillow Offers program, which aimed to buy and flip homes efficiently.
However, the algorithm had a median error rate of 1.9%, and in some cases as high as 6.9%, which led to purchase prices that exceeded the homes' future selling prices.
This misjudgement resulted in a $304 million inventory write-down and led to a workforce reduction of 2,000 employees, approximately 25% of the company.
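To see why a few percentage points of pricing error matter, consider the arithmetic of a flip. The numbers below are illustrative only, not Zillow's actual figures: when the model overestimates resale value, the buyer overpays at purchase and the planned margin evaporates.

```python
# Illustrative flip economics (hypothetical numbers, not Zillow's actuals).
true_resale_value = 400_000          # what the home actually sells for
target_margin = 0.05                 # flip margin the buyer plans for

for model_error in (0.019, 0.069):   # the reported median and worst-case error rates
    predicted_value = true_resale_value * (1 + model_error)  # model overestimates
    purchase_price = predicted_value * (1 - target_margin)   # offer priced off the prediction
    profit = true_resale_value - purchase_price
    print(f"error {model_error:.1%}: buy at ${purchase_price:,.0f}, "
          f"profit ${profit:,.0f} ({profit / purchase_price:.1%})")

# At a 1.9% overestimate the planned 5% margin shrinks to about 3.3%;
# at 6.9% the flip loses money before renovation and transaction costs.
```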
04. Bias in AI recruitment tools
No specific case is named in the sources here, but similar issues of bias in recruitment tools are well documented. Notably, AI algorithms can unintentionally incorporate biases from the data they are trained on.
For AI recruitment tools, this means that if the training dataset contains more resumes from one demographic, such as men, the algorithm may show a preference for those candidates, leading to discriminatory hiring practices.
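One standard way to audit such a tool is the "four-fifths rule" from US employment-selection guidelines: the selection rate for any group should be at least 80% of the rate for the most-selected group. Below is a minimal sketch of that check on hypothetical shortlisting decisions.

```python
from collections import Counter

# Hypothetical screening results: (applicant_group, model_shortlisted)
decisions = [
    ("men", True), ("men", True), ("men", True), ("men", False),
    ("women", True), ("women", False), ("women", False), ("women", False),
]

applied = Counter(group for group, _ in decisions)
selected = Counter(group for group, shortlisted in decisions if shortlisted)
rates = {group: selected[group] / applied[group] for group in applied}

best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "OK" if ratio >= 0.8 else "POSSIBLE ADVERSE IMPACT"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
# Here women are shortlisted at 25% vs 75% for men (ratio 0.33),
# well below the 0.8 threshold, so the tool warrants investigation.
```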
05. AI recruiting software at iTutorGroup
iTutorGroup's AI-powered recruiting software was programmed with criteria that led it to reject job applicants based on age. The software specifically discriminated against female applicants aged 55 and over and male applicants aged 60 and over.
This resulted in over 200 qualified candidates being unfairly dismissed by the system, and the US Equal Employment Opportunity Commission (EEOC) took action against iTutorGroup, leading to a lawsuit. Following the EEOC's action, iTutorGroup agreed to pay $365,000 to resolve the lawsuit and was required to adopt new anti-discrimination policies as part of the settlement.
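According to the EEOC's complaint, the discrimination here was not a subtle statistical bias but an explicit rule. Purely as a hypothetical reconstruction (the actual code was never published), the filter could have been as simple as the predicate below, which is also why probing a screener with matched test applicants is an effective audit.

```python
# Hypothetical reconstruction of the kind of hard-coded rule alleged in the
# EEOC case: reject female applicants 55+ and male applicants 60+. Such a
# rule violates the US Age Discrimination in Employment Act.
def auto_reject(age: int, gender: str) -> bool:
    return (gender == "female" and age >= 55) or (gender == "male" and age >= 60)

# A simple audit: probe the screener with otherwise-identical applicants
# who differ only in age and gender, and flag any divergence in outcomes.
probes = [(54, "female"), (55, "female"), (59, "male"), (60, "male")]
for age, gender in probes:
    print(age, gender, "->", "rejected" if auto_reject(age, gender) else "passed")
```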
06. Sports Illustrated may have published AI-generated writers
In November 2023, online magazine Futurism reported that Sports Illustrated was publishing articles by AI-generated writers.
Futurism cited anonymous sources involved in creating the content and said the storied sports magazine had published a lot of fake authors, with some of the articles under those fake author bylines generated by AI as well. After Futurism's questioning, The Arena Group subsequently removed the articles in question from the Sports Illustrated website.
Responding to the Futurism article, the Sports Illustrated union posted a statement saying it was horrified by the allegations and demanded answers and transparency from Arena Group management. The SI Union also said that such practices violate everything it believes in about journalism, and that it deplores being associated with something so disrespectful to its readers.
07. Dataset trained Microsoft chatbot to spew racist tweets
In March 2016, Microsoft learned that using Twitter interactions as training data for machine learning algorithms can have dismaying results.
Tay, Microsoft's most advanced chatbot at the time, declared on Twitter, after less than 24 hours of learning from human interactions, that Hitler was right to hate the Jews. The aim was to build a slang-filled chatbot that would raise machine-human conversation quality to a new level. Instead, it was revealed to be a robot parrot with an Internet connection.
The chatbot sat at the top of the company's AI technology stack, but the harsh reality of the open Internet quickly ruined its innocent worldview: a model built in a clean lab environment has no immunity to detrimental outside influence. This remains one of the best-known examples of a popular AI failure.
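Tay's core design flaw was that it learned directly from raw user input with no gating. A minimal mitigation is sketched below using a hypothetical keyword denylist; real systems would rely on trained toxicity classifiers and human review rather than a word list.

```python
# Minimal sketch: never let unvetted user messages into the learning loop.
# A keyword denylist is a stand-in here; production systems use trained
# toxicity classifiers plus human review.
DENYLIST = {"hitler", "nazi", "slur_example"}  # hypothetical, not exhaustive

def safe_to_learn_from(message: str) -> bool:
    tokens = set(message.lower().split())
    return not (tokens & DENYLIST)

training_buffer = []
for msg in ["nice weather today", "hitler was right"]:  # simulated user input
    if safe_to_learn_from(msg):
        training_buffer.append(msg)   # only vetted messages reach the model
    # rejected messages are dropped (optionally logged for human review)

print(training_buffer)  # -> ['nice weather today']
```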
08. AI for secure system access: Face ID cheated with a mask
According to Apple, Face ID creates a three-dimensional model of your face using the iPhone X's powerful front-facing camera and machine learning. If you have an iPhone X with Face ID, you should make sure no one is wearing a mask of your face.
The machine learning component allows the system to adapt to cosmetic changes, such as putting on makeup, donning a pair of glasses, or wrapping a scarf around your neck, while maintaining security. Bkav, a security firm located in Vietnam, discovered that by attaching eyes to a 3D mask, they could successfully unlock a Face ID-equipped iPhone.
The stone-powder mask, which cost approximately 200 US dollars, was crafted for the attack; the eyeballs were simply infrared pictures printed on paper. Wired, on the other hand, attempted to defeat Face ID using masks but was unable to replicate the results.
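Conceptually, face recognition reduces a face to a numeric template and accepts any probe whose distance from the enrolled template falls under a tolerance. That tolerance is what lets makeup or glasses pass, and it is also the slack a sufficiently good mask exploits. The sketch below is only a toy illustration with made-up three-component templates; Face ID's real pipeline (depth maps, attention detection) is proprietary.

```python
import math

def distance(a, b):
    # Euclidean distance between two face templates (toy 3-D vectors here;
    # real systems use high-dimensional embeddings from a neural network).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

enrolled = (0.12, 0.85, 0.44)     # owner's enrolled template (made up)
TOLERANCE = 0.10                  # slack that absorbs makeup, glasses, scarves

probes = {
    "owner with makeup": (0.15, 0.83, 0.46),
    "stranger":          (0.70, 0.20, 0.90),
    "crafted 3D mask":   (0.14, 0.86, 0.41),  # close enough to slip under the bar
}
for name, template in probes.items():
    verdict = "unlock" if distance(enrolled, template) < TOLERANCE else "reject"
    print(f"{name}: {verdict}")
```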
09. AI believes that members of Congress resemble criminals
Amazon is responsible for another face recognition blunder. Its Rekognition system was meant to detect offenders based on their facial images, but when it was put to the test using a batch of photos of members of Congress, it proved to be not only incorrect but also prejudiced.
According to the ACLU (American Civil Liberties Union), nearly 40% of the system's false matches in the test were of people of colour, even though people of colour make up only about 20% of Congress. It is unclear whether the fault lay in how the system recognises non-white faces or in skewed training data; either way, depending on such AI to determine whether or not a person is a criminal would be just crazy.
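Part of the dispute concerned the match-confidence threshold: the ACLU ran its test at Rekognition's default 80% similarity setting, while Amazon said law-enforcement use should require a 99% threshold. The sketch below uses made-up scores to show how that single deployment parameter changes the reported match count.

```python
# Hypothetical similarity scores between Congress member photos and a
# mugshot database (illustrative numbers only).
match_scores = [0.62, 0.81, 0.84, 0.79, 0.92, 0.85, 0.99, 0.70]

def count_matches(scores, threshold):
    # Every score at or above the threshold is reported as a "match".
    return sum(score >= threshold for score in scores)

for threshold in (0.80, 0.99):
    print(f"threshold {threshold:.0%}: {count_matches(match_scores, threshold)} matches")
# threshold 80%: 5 matches   threshold 99%: 1 match
# The same model yields very different false-match rates depending on one
# deployment parameter, which is why default settings matter so much.
```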
10. "AI-driven" food cart malfunctions on the tarmac
On the tarmac, a food cart malfunctioned, spinning out of control and veering ever closer to a vulnerable aeroplane parked at a gate. Finally, a yellow-vested ground worker was able to stop the cart by hitting it with another vehicle and knocking it over, in a scene that went viral. The cart was in fact neither driven nor controlled by artificial intelligence in any manner, yet it is still cited as one of the popular AI failures.
11. Air Canada pays damages for chatbot lies
In February 2024, Air Canada was ordered to pay damages to a passenger after its virtual assistant gave him incorrect information at a particularly difficult time. Jake Moffatt consulted Air Canada's virtual assistant about bereavement fares following the death of his grandmother in November 2022. The chatbot told him he could buy a regular-price ticket and apply for the bereavement discount afterwards; following those instructions, he purchased a one-way ticket to Toronto and a return flight to Vancouver. But when he submitted his refund claim, the airline turned him down.
Moffatt took Air Canada to a tribunal in Canada, claiming the airline was negligent and had misrepresented information through its virtual assistant. The tribunal sided with that argument, saying the airline did not take reasonable care to ensure its chatbot was accurate, and ordered the airline to pay Moffatt CA$812, including CA$650 in damages.
In spite of all these failures, AI is helping in disaster management. Recent advancements in machine learning and artificial intelligence are allowing researchers, engineers, and scientists to access and analyse new and more extensive data sources more easily than ever before. The role of AI in disaster relief is to help governments and relief agencies work through large volumes of complex and fragmented data to generate useful information they can act on more quickly than ever before. AI is also helpful in the early detection of climate-change effects and in helping governments and agencies coordinate effective relief programs.
Organisations planning AI deployments should consider ethical concerns such as data privacy and security, bias in AI, accessibility and equity, accountability and decision making, over-reliance on technology, and infrastructure and resource constraints.
AI is not inherently causing disasters in society, but there have been some notable instances where applications of AI have led to negative impacts or amplified pre-existing issues. So it is very important, while taking these real concerns seriously, to see them as challenges to be addressed within the field of AI development and deployment, rather than as AI actively causing disasters.