Harnessing the power of artificial intelligence has proven useful in many fields, including military activities.
The recent move by the United States military to support artificial intelligence research best displays the importance of this groundbreaking technology.
The research will span several areas, including networks that can identify images made by other AIs and systems with common sense.
David Gunning, the programme manager, said that the US Defence Department research agency (Darpa), which is well known for supporting early work on self-driving vehicles and the Internet, has recently been working with outside researchers on artificial intelligence systems that can adapt to unforeseen circumstances.
He added that the research could lead to machines that have common sense, making them more adaptable and flexible and better able to interact with people naturally.
According to the Financial Times, current artificial intelligence (AI) systems are brittle and cannot deal with issues that fall outside the narrow scope they were built to handle.
AI ‘Common Sense’
Recently, artificial intelligence has grown into a popular way of automating repetitive or complex activities, such as identifying patterns in huge volumes of data. However, there has been a resurgence in research aimed at giving machines an instinctive awareness of the world.
What’s more, Paul Allen, Microsoft’s co-founder, doubled his investment in his artificial intelligence (AI) research institute this year in a bid to concentrate more on this promising idea.
Typically, Darpa funds a wide range of academic and commercial groups to conduct research on its behalf.
Recently, one such undertaking triggered internal protest at Google over the company’s military image-recognition work.
Even so, David Gunning said that Darpa brought together third-party artificial intelligence (AI) researchers early this year for a brainstorming session, particularly on AI ‘common sense’.
Thanks to that gathering, the agency is currently piecing together a formal proposal related to the project.
In another project, Darpa intends to bring together forensic specialists this summer to evaluate technologies for identifying artificially produced images.
The images are referred to as deepfakes because they depend on deep-learning methods.
Deepfakes normally involve projecting one person’s face onto another person’s body, and the results can be strikingly realistic.
A newer technique of particular concern involves generative adversarial networks (GANs), which can be trained to bypass automated forgery-detection techniques.
David Gunning told the MIT Technology Review that Darpa intends to develop more advanced detection techniques out of concern that generative adversarial networks (GANs) could be used for misinformation.
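The adversarial idea behind GANs can be sketched in miniature: a generator learns to produce samples that fool a discriminator, while the discriminator learns to tell real data from fakes. The toy example below is a minimal NumPy sketch of that training loop on a one-dimensional "dataset" (a Gaussian); all numbers, the affine generator, and the logistic discriminator are illustrative simplifications, not any real deepfake system, which would use deep convolutional networks on images.

```python
import numpy as np

rng = np.random.default_rng(0)
real_mean, real_std = 4.0, 0.5      # the "real" data distribution (illustrative)

# Generator: maps noise z to a sample via a*z + b (stands in for a deep network)
a, b = 1.0, 0.0
# Discriminator: logistic regression on a single value, sigmoid(w*x + c)
w, c = 0.0, 0.0
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    z = rng.standard_normal(64)
    fake = a * z + b
    real = rng.normal(real_mean, real_std, 64)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(w * x + c)
        grad = p - label             # cross-entropy gradient w.r.t. the logit
        w -= lr * np.mean(grad * x)
        c -= lr * np.mean(grad)

    # Generator step: push D(fake) toward 1 by adjusting a and b
    p = sigmoid(w * fake + c)
    grad_logit = (p - 1.0) * w       # chain rule through the discriminator
    a -= lr * np.mean(grad_logit * z)
    b -= lr * np.mean(grad_logit)

# After training, the generator's offset b has drifted toward the real mean,
# i.e. the fakes have become statistically harder to distinguish from real data.
```

The same tug-of-war, scaled up to image-generating networks, is what makes GAN-produced forgeries progressively harder for automated detectors to catch.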
At the event this summer, the agency wants forensic professionals to compete both at generating the most convincing artificial intelligence (AI)-produced fake video, imagery and audio, and at identifying the counterfeits automatically.
Aside from Darpa’s efforts, Google has reached a notable milestone by creating natural-seeming artificial intelligence technologies.
In fact, earlier this year the company built a system that conducts automated phone conversations.