Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) with more than 6.4 million amassed views of his AI columns. As a seasoned executive and high-tech entrepreneur, he combines practical industry experience with deep academic research and serves as a Stanford Fellow at Stanford University. Formerly a professor at USC and UCLA, and head of a pioneering AI Lab, he frequently speaks at major AI industry events. Author of over 50 books, 750 articles, and 400 podcasts, he has appeared on media outlets such as CNN and co-hosted the popular radio show Technotrends. He has been an adviser to Congress and other legislative bodies and has received numerous awards and honors. He serves on several boards and has worked as a venture capitalist, an angel investor, and a mentor to founder entrepreneurs and startups.
As you might guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, turning those broad principles into something sufficiently tangible and detailed to be used when crafting AI systems is a tough nut to crack.
I think you can guess where this is heading. If the humans whose decisions the data is patterned upon have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in the AI-crafted modeling per se.
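To make that concrete, here is a minimal Python sketch using entirely made-up data and hypothetical feature names (an income figure and a protected group attribute). It is not anyone's production system; it just illustrates how an off-the-shelf classifier trained on biased historical decisions will dutifully reproduce that bias.

```python
# Minimal sketch with synthetic, made-up data: a classifier trained on
# historical decisions that penalized one group quietly learns the same penalty.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical applicants: a legitimate feature (income) and a protected
# attribute (group) that ought to be irrelevant to the decision.
income = rng.normal(50, 15, n)
group = rng.integers(0, 2, n)  # two demographic groups, labeled 0 and 1

# Historical human decisions: mostly driven by income, but group 1 applicants
# were penalized -- the untoward bias hidden inside the "ground truth" labels.
approved = ((income - 10 * group + rng.normal(0, 5, n)) > 45).astype(int)

# The model never sees the word "bias"; it simply pattern-matches the labels.
X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# Two otherwise identical applicants who differ only in group membership get
# different approval scores, because the model mimicked the historical pattern.
same_income = np.array([[50, 0], [50, 1]])
print(model.predict_proba(same_income)[:, 1])  # the group-1 applicant scores lower
```

The point of the sketch is that nothing malicious was coded; the skew rides in on the labels themselves.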
Fortunately, various laws about data and privacy have been gradually enacted. The laws differ markedly from country to country. They can also differ from state to state. But at least there is an awareness of the dangers associated with having data in databases that can rapidly spread information about you. The hope is that you will have legal recourse to try to prevent false information from spreading, or at least to be aware that such information about you exists.
The short answer is “No,” they did not conspire per se. Instead, they all happened to make use of algorithm Y. They didn’t “conspire” in the sense of gathering in a backroom and agreeing to use algorithm Y in their AI. There wasn’t a mafia-style get-together at which they all agreed to adopt algorithm Y. As a side note, one supposes that could indeed happen, but for the sake of discussion we will put those alternatives to the side for now.
In years past, the possibility of widely using the same algorithms existed, though more hurdles needed to be surmounted. Today, algorithm-storing hubs can be accessed almost effortlessly. Open source is more accepted than it was in prior generations. And so on. A research study described the phenomenon this way: “The rise of algorithms used to shape societal choices has been accompanied by concerns over monoculture—the notion that choices and preferences will become homogeneous in the face of algorithmic curation.” They further point out: “Even if algorithms are more accurate on a case-by-case basis, a world in which everyone uses the same algorithm is susceptible to correlated failures when the algorithm finds itself in adverse conditions.”
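For those who want to see the correlated-failure concern in numbers, here is a back-of-the-envelope Python simulation with purely illustrative error rates and a hypothetical set of twenty firms. It contrasts a monoculture, in which every firm runs the same algorithm, with a world in which each firm's algorithm errs independently.

```python
# Rough sketch with illustrative numbers only: when every decision-maker runs
# the same algorithm, its bad days are everyone's bad days at once; with
# diverse algorithms, the failures land on different cases.
import numpy as np

rng = np.random.default_rng(1)
n_cases, n_firms = 10_000, 20

# Monoculture: one shared algorithm errs on 10% of cases, and every firm
# inherits exactly the same mistakes.
shared_error = rng.random(n_cases) < 0.10
mono_wrong = np.tile(shared_error, (n_firms, 1))

# Diversity: each firm's algorithm errs on its own independent 10% of cases.
div_wrong = rng.random((n_firms, n_cases)) < 0.10

# Case-by-case accuracy is the same either way; what differs is how often
# every single firm is wrong about the same person at the same time.
print("cases where all firms err, monoculture:", np.mean(mono_wrong.all(axis=0)))
print("cases where all firms err, diverse:    ", np.mean(div_wrong.all(axis=0)))
```

Under the monoculture assumption, roughly one case in ten is misjudged by everyone simultaneously; with independent algorithms, the everyone-wrong event is vanishingly rare, which is exactly the worry the quoted researchers raise.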