AI is garnering a great deal of attention of late, yet despite the recent hype it has been with us for some time. It has been used to enhance productivity, analyze data, and make decisions in place of humans. While it delivers increasingly impressive results, treating it as an end-all solution calls for prudence: AI has made countless mistakes, resulting not only in financial loss but in loss of life as well.

AI is being deployed in the financial markets right down to the consumer level, and it affects everyone who has a retirement account, and even more so anyone who relies on their company’s financial success for their own financial well-being. The Flash Crash of 2010 hit the entire marketplace, temporarily wiping out an enormous amount of wealth. On May 6, 2010, the U.S. stock market experienced a sudden and severe drop, commonly referred to as the “Flash Crash.” It was triggered by a combination of factors, including the use of algorithmic trading strategies by high-frequency trading (HFT) firms. These algorithms, designed to execute trades at high speeds, exacerbated the market downturn by reacting to rapidly changing market conditions. The Flash Crash resulted in a temporary loss of nearly $1 trillion in market value before the market recovered; much of that value was redistributed, and not necessarily equitably, among market participants.
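To see how high-speed algorithms can feed on one another, consider a deliberately simplified, hypothetical sketch (none of this reflects any firm’s actual strategy, and every number is invented): a handful of momentum-following bots each sell when the price falls, which pushes the price lower still and triggers yet more selling.

```python
# Hypothetical illustration only: a toy market where momentum-following
# bots each sell into a falling price, amplifying a small initial dip.
price = 100.0
bots = 5                     # number of identical momentum-selling algorithms
sell_threshold = -0.5        # each bot sells if the last move was worse than -0.5%
price_impact = 0.4           # each sale knocks another 0.4% off the price (assumed)

last_move = -0.6             # a modest initial dip, in percent
history = [price]

for tick in range(10):
    sellers = bots if last_move < sell_threshold else 0
    move = sellers * -price_impact          # selling pressure deepens the drop
    price *= 1 + move / 100
    history.append(round(price, 2))
    last_move = move                        # the next tick reacts to this tick's move

print(history)   # the dip snowballs even though no single bot "intended" a crash
```

Each bot is individually behaving as designed; it is the interaction between them, at machine speed, that turns a small dip into a rout.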
Procter & Gamble, a multinational consumer goods company, witnessed a sharp decline in its stock price during the Flash Crash. At one point, its shares plummeted by nearly 37% before recovering. Although the drop was temporary and the stock regained its value, it caused panic among investors and highlighted the vulnerability of even well-established companies to sudden market disruptions.
Accenture, a global professional services company, also experienced a dramatic decline in its stock price during the Flash Crash. The company’s shares briefly dropped by approximately 99%, from around $40 to just a few cents per share, before recovering. The extreme price anomaly was a result of erroneous trades executed during the chaotic market conditions. Again, while the impact was severe, Accenture survived the incident.
3M, a multinational conglomerate known for its diversified products, was another company affected by the Flash Crash. Its stock price experienced a sharp decline of around 22% before recovering. Similar to other companies, 3M’s stock volatility during that period reflected the widespread market turmoil rather than fundamental issues specific to the company.
At the level of an individual investment firm, Knight Capital Group’s trading glitch was a similar disaster attributed to automation. In August 2012, Knight Capital Group, a prominent market maker and trading firm, experienced a significant trading glitch caused by an erroneous software deployment. The faulty software sent numerous unintended orders into the market within a short period. As a result, Knight Capital incurred losses of approximately $440 million in just 45 minutes, leading to the firm’s near-collapse and subsequent acquisition.
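Knight’s actual systems have never been published, so the following is purely an illustrative sketch of one plausible failure mode behind an erroneous deployment: a server left on an old build keeps emitting child orders because it never records them as filled. All names and numbers below are assumptions.

```python
# Hypothetical sketch of a deployment mismatch; not Knight Capital's actual code.
# Imagine one server misses a new release and keeps running old order-handling
# logic that sends child orders without ever tracking how much has been filled.

def child_orders_sent(total_shares, legacy_build, max_ticks=1000):
    """Return how many 100-share child orders go out before the parent order stops."""
    sent_to_market = 0
    filled = 0
    for _ in range(max_ticks):              # bounded here only so the demo terminates
        if filled >= total_shares:
            break
        sent_to_market += 1
        if not legacy_build:
            filled += 100                    # new build tracks fills and stops on time
        # the legacy build never updates 'filled', so it never stops on its own
    return sent_to_market

print(child_orders_sent(1000, legacy_build=False))  # 10 orders, as intended
print(child_orders_sent(1000, legacy_build=True))   # 1000 orders and still going
```

The point of the sketch is that a single stale server, not a malicious algorithm, is enough to flood the market with unintended orders in minutes.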
Worse than the loss of money is the loss of life, as with Uber’s self-driving car accident in 2018. In March 2018, an Uber self-driving car operating in autonomous mode struck and killed a pedestrian in Tempe, Arizona. Inadequate safety protocols, software flaws, and human error were all cited as contributing factors. The AI system in the Uber self-driving car struggled to accurately identify and classify the pedestrian, and the safety driver behind the wheel was not actively monitoring the road as required. While a lack of human intervention contributed to that accident, human involvement is not always enough, as the case of the Boeing 737 Max shows.
The Boeing 737 Max was equipped with an automated system known as the Maneuvering Characteristics Augmentation System (MCAS), which relied on AI algorithms to prevent the aircraft from stalling. However, a combination of design and software flaws led to two fatal crashes: Lion Air Flight 610 in October 2018 and Ethiopian Airlines Flight 302 in March 2019. These crashes resulted in the loss of 346 lives.
The failure of the MCAS system in the Boeing 737 Max case highlights the limitations and potential dangers associated with AI in safety-critical domains. The system relied on sensor data to make critical decisions regarding the aircraft’s control, but a faulty angle-of-attack sensor reading triggered erroneous responses from MCAS, leading to repeated nose-down commands and loss of control by the pilots.
Several factors contributed to this failure, including inadequate training and communication to pilots about the system, limited redundancy in the sensor inputs, and the absence of a fail-safe mechanism to override the automated system’s commands. The incident raised concerns about the lack of proper testing, validation, and regulatory oversight of AI systems in the aviation industry.
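The redundancy point can be made concrete with a small, hypothetical sketch (this is not Boeing’s flight-control code, and the thresholds are invented): a routine that trusts a single angle-of-attack reading will act on a faulty value, while one that cross-checks two sensors can detect the disagreement and hand control back to the pilots.

```python
# Hypothetical illustration, not Boeing's MCAS implementation: why acting on a
# single sensor is dangerous, and how a simple cross-check can fail safe.

AOA_DISAGREE_LIMIT = 5.0     # illustrative disagreement threshold in degrees (assumed)
STALL_AOA = 15.0             # illustrative stall-warning angle in degrees (assumed)

def single_sensor_command(aoa_left):
    """Trust one angle-of-attack vane: a stuck or faulty vane drives nose-down trim."""
    return "nose-down trim" if aoa_left > STALL_AOA else "no action"

def cross_checked_command(aoa_left, aoa_right):
    """Compare two vanes; on disagreement, disable automation and alert the crew."""
    if abs(aoa_left - aoa_right) > AOA_DISAGREE_LIMIT:
        return "disengage automation, alert pilots"
    if (aoa_left + aoa_right) / 2 > STALL_AOA:
        return "nose-down trim"
    return "no action"

# A faulty left vane reads 22 degrees while the aircraft is actually at 5 degrees.
print(single_sensor_command(22.0))          # -> nose-down trim (erroneous, repeated)
print(cross_checked_command(22.0, 5.0))     # -> disengage automation, alert pilots
```

The design lesson is the same one the investigators drew: redundancy and a clear path for human override are what keep a single bad input from becoming a catastrophe.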
These examples underscore the need for careful consideration and thorough evaluation of AI systems in safety-critical and financial applications. They emphasize the importance of human oversight, robust system design, rigorous testing, and comprehensive risk assessment to prevent catastrophic failures that can cost lives, property, financial security, and trust in AI technologies.
It is worth noting that while AI has the potential to improve safety and efficiency in various domains, incidents like the Boeing 737 Max crashes serve as reminders that the deployment and integration of AI must be approached with caution, ensuring proper safeguards and adherence to rigorous standards and regulations. The lessons learned from such failures are crucial in driving improvements in AI technology and fostering responsible and ethical AI practices. Until then, the human element in controlling AI remains paramount: human supervision trumps AI decision-making.
Think twice before fully trusting AI and machine learning. It could be the difference between prosperity and financial ruin, or even between life and death.