An impressive feature of artificial intelligence (A.I.) is the technology's ability to harness computational power to simulate cognition in machines. Yet A.I. critics have grown concerned that many artificial intelligence projects are centrally controlled and are therefore producing only "narrow A.I."

Unlike human cognition, narrow A.I. is neither conscious nor driven by emotion. Rather, it operates within a pre-determined, pre-defined range, even when it appears far more sophisticated than that. Virtual assistants like OK Google, Apple's Siri, and Amazon's Alexa are examples of narrow A.I. While these systems can communicate with users and answer questions, they are nowhere close to having human-like intelligence.

According to Arif Khan, V.P. of Marketing at SingularityNET, centrally controlled A.I. projects led by large tech companies have produced narrow data sets, which could harm the future of artificial intelligence.

“Let’s say Facebook wants to develop A.I. algorithms. Facebook's big data sets will never be disclosed to a competitor, as it is in Facebook's best interest to keep this information private. In turn, Facebook would never have access to data sets from its competitors. Moreover, the data sets that Facebook builds upon would contain private information, most likely to be used for its own benefit to drive shareholder value,” Khan explained. “Yet if Facebook created an A.I. with the ultimate aim of optimizing shareholder value, Facebook's A.I. algorithms would drive user behavior in a specific direction, creating a pigeonholed view of society powered by these algorithms. For example, humans might share fake news, resulting in Facebook's algorithms furthering the proliferation of such content. If you just have data sets, and algorithms that are biased toward creating shareholder value through ad clicks, what is created might not benefit the overall good of society.”

For example, in 2016 Microsoft launched an A.I.-based Twitter bot named “Tay” as an experiment in “conversational understanding.” Microsoft noted that the more users chatted with Tay, the smarter the bot became, learning to engage with humans through “casual and playful” conversation.