As AI penetrates deeper into the healthcare industry, researchers at New York's Mount Sinai Icahn School of Medicine have formed a unique collaboration with the hospital's in-house AI system, Deep Patient. The researchers taught the system to predict risk factors for 78 distinct diseases by feeding it electronic health records drawn from 700,000 patients.
Deep Patient has proven to be more than a simple program. Like other advanced artificial intelligence (AI) systems, it learns, makes independent decisions, and has evolved from a technological tool into a partner that collaborates and coordinates with humans. In fact, four out of five executives surveyed in Accenture's latest Tech Vision report said they believe AI will work alongside humans in their organizations within the next three years.
AI already serves as the public face of some organizations, handling tasks such as initial interactions and customer service. Businesses seeking to leverage that potential, however, must also acknowledge the responsibility that comes with it. They ought to mold their AI systems to act responsibly and to reflect both societal and company norms of transparency, fairness, and accountability.
Previously, AI was driven by early expert systems, statistical regressions, and rules-based data analytics programs. The rise of powerful deep neural networks, however, now gives AI systems the capacity to do the unexpected in ways mere programs cannot. To handle this new responsibility, businesses can look to the stages of human development for guidance.
For starters, humans begin by learning how to learn, then by explaining their actions and thoughts, before eventually accepting responsibility for their choices. Similarly, AI starts by learning basic principles before building up its skills from a set of taxonomical structures. As a result, the company with the best data for training an AI to do its job creates the most powerful AI system.
Google recently unveiled an open-source dataset that can help companies teach their AI to understand how people talk. To build it, Google collected roughly 65,000 recordings of thousands of individuals speaking, all to prepare an AI system to comprehend just 30 words in a single language.
Businesses creating their own AI systems must ensure a shared basis of understanding between the AI and the parties that will interact with it, whether other AIs, employees, or customers. They must also take care when choosing training data and taxonomies: the process is about reducing data bias, not just achieving scale.
Audi has announced that it will assume liability for accidents involving its 2019 A8 model when its "Traffic Jam Pilot" system is in use. Likewise, the German federal government has adopted forward-looking rules on how self-driving cars ought to behave in an unavoidable accident.
Leaders must accept the challenge of raising AI in a manner that acknowledges its new roles and impact in society. Doing so will set the bar for what it takes to build responsible, explainable AI systems while building trust with employees and customers. That move will usher in a vital step in integrating AI into society: Citizen AI.