

Burden of proof? You literally just made your first actual point about anything I said; I shouldn’t have even wasted the time responding.
Are you really staking your argument on the idea that companies don’t face consequences for software fuckups?
- Equifax: up to $700M settlement after failing to patch a known vulnerability that led to the 2017 breach
- TSB Bank (UK): £48.65M fine for the 2018 core-banking migration meltdown that locked out customers
- Uber: settled with the victim’s family after its self-driving test vehicle killed a pedestrian
- Boeing 737 MAX: paid $2.5B for hiding MCAS flight-control software flaws
- CrowdStrike: shareholders sued after a faulty update crashed Windows machines worldwide, and the company lost a shit ton of market value over the incident
I’m sure you’ll make up excuses for why somehow none of these count, but the list runs so deep that you could debate every example here and ten more would pop up. Plenty more happens behind the scenes too, through SLA contracts and the like.
This sounds like you knew you were wrong all along but still wanted to be snide, condescending, and undeservedly arrogant about it.
The fact that some don’t doesn’t mean that none do.
Go do some legwork yourself before demanding it of me, after thinking you could just troll your way out of your ridiculous initial comment.
Reading your other comments outside this thread, this whole chain seems so illogical. What triggered this bizarre emotional reaction to such a relatively innocuous comment? You must have been reading something into it.



Personally, I feel that the hate for AI is misplaced (mostly; I do get there’s a lot of nuance around people’s feelings on training-data sourcing, etc.). Partly that’s because “AI” is such a wide catch-all term, but mostly, by far, it’s because all of the problems with AI are actually just problems with the underlying crony capitalism in charge of its development right now.
Every problem like AI “lacking empathy” comes down to the people using it: they either don’t care to keep it out of places where it fails at such goals, or they’re explicitly using it to strip people of their humanity, which is itself something that inherently lacks empathy.
If you take away the horrible business motivations, I think it’s pretty undeniable that AI is and will be a great technology for a lot of purposes, just not for a lot of the ones it’s used for now (like the continued idea that all UI can be replaced such that programmers won’t be needed for specific apps, and other such uses).
Obviously we can’t just separate the two, but I think it’s important to keep in mind, especially regarding regulation. Big AI is currently practically begging to be regulated in a way that makes the moat to create useful AI so large that no useful open-source, general-purpose AI tools can exist without corporate backing. I think that’s one of their end goals, along with making it far more expensive to become a competitor.
That said, this has gotten a bit off track, since the original point was about software in general. On that front, and for AI, I do believe empathy can be built in; done correctly, a computer system could have a lot more empathy than most human beings, who typically act with meaningful empathy only toward people they personally identify with, a tendency that reinforces awful systemic discrimination.
As for the Flock example, I think it’s almost certain they got in with some backroom deals, and in a fairer world… where those somehow still exist, the police department would have a contract with stipulations about what happens after false identifications. The police officers also wouldn’t be traumatizing people over stolen property in the first place.
That is all to say: I think that often, when software is blamed, what should actually be blamed is the business goals that lead to the creation of that software and the people behind them. The software is, after all, an automation of the will of its owners.