7 Steps to Reflect on How 2019 Went for AI’s Integration into Society
1- AI breaks Moore's Law and continues to impress scientists
This year, AI surpassed human-level performance in two more fields, according to the Stanford Annual AI Index Report: DeepMind's AlphaStar defeated a professional StarCraft II player, and systems detecting diabetic retinopathy (DR) outperformed doctors.
AI-enabled systems continued to surprise their creators with their performance. For example, Brandon Fornwalt and his colleagues revealed groundbreaking research findings, showing that their AI model can predict patient death within a year better than practicing cardiologists, relying only on electrocardiogram (ECG) results, yet they do not really know how it does so.
AI's performance impressed even Stanford University researchers, who reported that the rate of technological advance in artificial intelligence has outpaced Moore's Law, with compute doubling every 3.4 months.
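To put that pace in perspective, here is a back-of-the-envelope sketch (assuming simple, steady exponential doubling) comparing growth under the reported 3.4-month doubling period with Moore's Law's commonly cited 24-month period:

```python
# Back-of-the-envelope sketch, assuming simple exponential doubling.
# The 3.4-month figure is the Stanford report's; 24 months is the
# common reading of Moore's Law.

def growth_factor(months: float, doubling_period_months: float) -> float:
    """How many times a quantity multiplies after `months` of steady doubling."""
    return 2 ** (months / doubling_period_months)

years = 2
ai_compute = growth_factor(12 * years, 3.4)  # roughly 2^7.06, about 133x
moore = growth_factor(12 * years, 24.0)      # 2^1, i.e. 2x
print(f"Over {years} years: AI compute ~{ai_compute:.0f}x, Moore's Law {moore:.0f}x")
```

In other words, at the reported rate two years is enough for a two-orders-of-magnitude gap to open up over the classic Moore's Law trajectory.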
2- AI systems are on the path to shaking human hegemony
With this rate of growth, AI-enabled systems are carving a path from being a mere helper for human beings to working alongside humans, if not replacing them. This year, the World Intellectual Property Organization (WIPO) invited the public on numerous occasions to discuss whether AI should be granted the same intellectual property rights as humans.
On one hand, major scientists are betting on AI to solve the greatest mysteries of the universe. For example, NASA heliophysicist Madhulika Guhathakurta shared that the agency can currently analyse "only a fraction" of the data collected from spacecraft due to limited resources, so scientists "need to utilise" AI tools; NASA hopes to use artificial intelligence to bring the interpretation of data from future telescopes and satellites to a new level.
On the other hand, media report growing public concern over job security. According to Brookings Institution research findings, mid-career employees and technical workers might indeed have something to worry about; if so, society will not become fairer. And some professionals have given up the fight altogether: this year, South Korean Go master and 18-time world champion Lee Se-dol announced his retirement from professional Go competitions, attributing his decision to the power of modern AI technology.
3- AI expansion brings more security concerns in cyberspace and real life
Less happy about this rocket-speed rise are the victims of AI-enabled cyberattacks, and the company executives who will now have to invest even more in security measures. This year, the World Economic Forum warned that "AI-powered cyberattacks are not a hypothetical future concept":
“There is little doubt that artificial intelligence (AI) will be used by attackers to drive the next major upgrade in cyber weaponry and will ultimately pioneer the malicious use of AI”
A high-profile AI-enabled attack hit the news this spring, when a company that declined to reveal its name fell victim to a synthetic voice attack: hackers used AI to mimic the CEO's voice and convinced an employee of an urgent need to transfer €220,000 ($243,000). In the aftermath of the scandal, security specialists highlighted that next time it could be a synthetic video call.
And it is not only the attacks that raise security questions. Researchers from AI firm Kneron reportedly fooled facial recognition systems using a 3D mask or even just a regular photo. Kneron demonstrated the vulnerability of the AliPay and WeChat payment systems and, even worse, of facial recognition systems at banks, airports, and border crossings. While Apple's iPhone X withstood the test, the experiment raised legitimate concerns about the quality of AI-based systems widely deployed across various fields.
4- AI sets a whole new precedent in legal systems
While Amazon got away with public apologies after its Rekognition software "recognised" famous athletes as criminals, and Florida Atlantic University simply built a robotic dog based on Boston Dynamics' quadruped robot, which famously suffered a dramatic onstage death in Las Vegas this summer, other AI developers may face legal consequences for their creations' failures.
The first widely known legal action against AI developers was taken by Hong Kong-based Samathur Li Kin-kan, who filed a $23 million lawsuit against Raffaele Costa, CEO and founder of Tyndaris Investments, the company that sold him an automated platform based on a supercomputer called "K1". The problem: despite promises of AI-driven smart investment decisions, the system lost a considerable portion of its client's fortune.
While the verdict is not expected before spring 2020, the case has already caused quite a stir, sparking public debate on the limits of liability for AI creators who essentially do not control their creations, or do not fully understand how the machines make their decisions.
5- AI has not yet cured itself of cognitive biases
This year, ImageNet-based systems were once again shown to produce problematic classifications, rife with racist and misogynist labels. The depth of the problem was also revealed by Rashida Richardson's research findings, showing that systems deployed in policing are not free from racism or corrupt practices either.
Much of the bias can be blamed on societal "norms" and on the fact that many AI systems are trained on historical data, which simply transfers cognitive biases to the machines. Yet the AI industry has not progressed much in solving the problem within its own walls: for example, the share of female AI PhD recipients in the US has remained virtually constant at 20% since 2010, far from closing the gender gap.
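The mechanism by which historical data transfers bias is easy to illustrate. In this toy sketch (entirely hypothetical data and a deliberately naive "model"), a system that simply learns the most common past outcome for each group faithfully reproduces the skew of its training records:

```python
# Toy sketch with hypothetical data: a naive model trained on biased
# historical decisions reproduces that bias in its own predictions.
from collections import Counter

# Hypothetical historical (group, approved) decisions, skewed by past bias.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 40 + [("B", False)] * 60)

def train_majority_model(records):
    """'Learn' the most common historical outcome for each group."""
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train_majority_model(history)
print(model)  # {'A': True, 'B': False}: the learned rule mirrors the old skew
```

Real systems are far more sophisticated, but the principle is the same: without explicit debiasing, the model optimises for agreement with past decisions, prejudices included.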
A Stanford survey also revealed that companies employing AI-enabled systems implement few preventive measures: only 19% of large companies surveyed reported that their organizations are taking steps to mitigate risks associated with the explainability of their algorithms, and just 13% reported mitigating risks to equity and fairness, such as algorithmic bias and discrimination.
6- AI creates the problem of deepfakes, solves it, but raises more issues along the way
This year, AI systems for generating synthetic content became even better, with reports of fake news, videos, images and texts populating the headlines. And the field keeps growing: this December, California-based AI company OpenAI released its synthetic-text-generating GPT-2 model, whose release had previously been postponed as too dangerous.
Social media platforms are facing pressure to take action. For example, Facebook has already deployed AI tools to detect fake accounts and launched the Deepfake Detection Challenge to stop the spread of fake content. Scientists are working fiercely on the issue too: this month, researchers from the University of Waterloo in Canada presented their own AI-powered tool to fight fake news.
Yet Witness Lab pointed out the potential pitfalls of using current technological solutions against the deepfake invasion, identifying no fewer than 14 dilemmas for public discussion. The researchers warned that, if implemented without precautions, these tools could give certain companies more power and raise even more questions about data storage, access and ownership, further diminishing people's control over their own data. They also highlighted that, in this matter, even implementing blockchain everywhere would hardly solve any problems.
7- AI attracts investment, and some hope to fake AI until they make it
Venture capital is pouring into AI-related firms, and some startups try to fake it until they make it, hiding humans behind their "AI systems" to imitate machine performance, like the ill-reputed Indian startup Engineer.ai. MMC Ventures reportedly suspects that 40 percent or more of companies claiming to use AI are faking it to some degree.
Scientific output is growing even faster than VC funding: Stanford University reported over 300% growth in the volume of peer-reviewed AI research articles this decade, and an over 800% increase in attendance at some specialised conferences over a five-year period.
Yet scientists have started to notice a problem with the reproducibility of research findings. For example, Joelle Pineau implemented various measures to encourage researchers to fully share their code at this year's Conference on Neural Information Processing Systems. In her words, the main challenge is that scientific papers do not always provide all the necessary details, or worse, provide misleading information that makes the reported results look more impressive.
Whether AI will cure its biases, or bring more peace and security, depends on AI ethics initiatives. If the trend towards open code continues, we might at least be able to separate the wheat from the chaff.
One thing is for sure: 2020 will not be short of AI news. With its accelerating rate of performance growth, AI will likely outperform humans in several more fields, deliver a few scientific breakthroughs, and transform a couple more industries.