PASSAGE OF THE DAY: "Many are against the regulation of AI because it would slow down innovation. But AI is not like other inventions. A surprising number of people seem to be getting their ideas about AI from Hollywood science fiction movies. Movie plots feature robots increasing in intelligence until they take over the human race. I don’t believe this is the case. There are enough real and present dangers to worry about, from biased ML models to willfully evil human beings. But intelligence without conscience can be dangerous. We are in the age of one person being able to perform lucrative crime at scale. One smart person, or a small team, can disrupt millions. AI will be the last invention humans will ever have to make, believes Oxford researcher Nick Bostrom. In building this new world, humanity could end up crafting its own demise."
-----------------------------------------------------------
COMMENTARY: "Elon Musk: 'With artificial intelligence we are summoning the demon,'" published by Brigette Hyacinth on LinkedIn on October 9, 2017. (Brigette Hyacinth is the author of 'The Future of Leadership: Rise of Automation, Robotics and Artificial Intelligence', a book which aims at offering the most comprehensive view of what is taking place in the world of AI and emerging technologies, and at giving valuable insights that will allow readers to successfully navigate the tsunami of technology that is coming our way.)
- Creators are imperfect
- Data is imperfect
- Paradigms for modelling internal reality are imperfect
- AI outputs are not easily predictable, and they could hurt people and violate Asimov's laws if humans become an obstacle to the AI's objectives.
- AI is loyal to what it wants to optimize. If human well-being is not included in the optimization and internal model of reality, AI may become a danger.
"Technology can be both a blessing and a curse." History teaches many lessons. Alfred Nobel, who invented dynamite, was revolutionary in fuelling the global industrial and economic revolutions. Dynamite was designed to accelerate the mining of resources and the building of infrastructure. However, to Nobel's displeasure, it was also used for destruction and for taking lives in wars across the globe.

Elon Musk has been outspoken about the dangers of AI without regulation. For such cautioning to come from someone working on technological breakthroughs in space exploration, electric vehicle development, and sustainable energy generation is unsettling. It appears Elon Musk doesn't want to let the cat out of the bag, but I believe he has access to classified information that has not been disclosed. He wants to raise awareness and establish guard-rails to make sure innovation does not recklessly run away to the detriment of safety, security, and privacy. He is simply raising concerns for ethical development because AI is progressing so fast that reactive regulation will not be enough.

Many are against the regulation of AI because it would slow down innovation. But AI is not like other inventions. A surprising number of people seem to be getting their ideas about AI from Hollywood science fiction movies. Movie plots feature robots increasing in intelligence until they take over the human race. I don't believe this is the case. There are enough real and present dangers to worry about, from biased ML models to willfully evil human beings. But intelligence without conscience can be dangerous. We are in the age of one person being able to perform lucrative crime at scale. One smart person, or a small team, can disrupt millions. AI will be the last invention humans will ever have to make, believes Oxford researcher Nick Bostrom. In building this new world, humanity could end up crafting its own demise."
https://www.linkedin.com/pulse/artificial-intelligence-summoning-demon-brigette-hyacinth
------------------------------------------------------------
STORY: "'Uncritical reliance' on AI in criminal justice could lead to 'wrong decisions', says Law Society," by law and current affairs writer Polly Botsford, published by 'Legal Cheek' on June 5, 2019.
GIST: “An uncritical reliance on tech” in the justice system is raising alarm bells for the Law Society. In a report published this week, Chancery Lane highlights a lack of accountability and transparency alongside potential human rights challenges of algorithms such as facial recognition, predictive crime mapping, and mobile phone data extraction being developed by the police, prisons and border forces. There are increasing concerns about police forces piloting facial recognition technology that can, for instance, cross-reference someone at a particular public event with crime data, or algorithms that predict the level of risk of an individual committing further crimes over a given time period.

Christina Blacklaws, president of the Law Society, said: “Complex algorithms are crunching data to help officials make judgement calls about all sorts of things … [and] … while there are obvious efficiency wins, there is a worrying lack of oversight or framework to mitigate some hefty risks … that may be unwittingly built in by an operator.”

The 80-page report, authored by a commission set up by the Law Society last year, sets out the challenges that algorithms raise, such as bias and discrimination. Because algorithms “encode assumptions and systematic patterns”, they can reinforce and then embed discrimination. It reads: “If, as is commonly known, the justice system does under-serve certain populations or over-police others, these biases will be reflected in the data, meaning it will be a biased measurement of the phenomena of interest, such as criminal activity.”

There is also a concern that different government agencies are not talking to each other. As Blacklaws puts it: “Police, prisons and border forces are innovating in silos to help them manage and use the vast quantities of data they hold about people, places and events,” but there is an “absence of … centralised coordination or systematic knowledge-sharing between public bodies.”

Chancery Lane makes a number of recommendations as a result of the research findings, including ensuring that public bodies rather than tech companies take ownership of the software involved, and setting up a National Register of Algorithmic Systems as an “initial scaffold for further openness, cross-sector learning and scrutiny.” The commission also mapped all the known algorithms currently being deployed or developed by police in England and Wales. The commission included members of the Law Society alongside academics, as well as Andrea Coomber from the all-party law reform and human rights organisation, Justice.
PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/