
DZone Research: Concerns About AI


Let's gather some insights on the state of artificial intelligence and take a look at hype, ethics, and security.



To gather insights on the state of artificial intelligence (AI) and all of its sub-segments (machine learning (ML), natural language processing (NLP), deep learning (DL), robotic process automation (RPA), regression, et al.), we talked to 21 executives who are implementing AI in their own organizations and helping others understand how AI can help their businesses. We began by asking, "What are your biggest concerns regarding AI today?" Here's what they told us:

Hype

  • There is a fair amount of hype and excitement, which will likely give way to the trough of disillusionment and then a rise again. We focus on putting AI into production because we believe this is the biggest challenge.
  • 1) The Knowledge Gap: When it comes to combining business process knowledge with technical knowledge, there can often be a disconnect between understanding how companies are run and where AI implementation can best be leveraged. 2) Hype Cycle: With the hype around AI dying down, the technology is now viewed as a more tangible application. However, many are still talking about AI purely from a technology standpoint, when the technology needs to be an enabler of an outcome. 3) Trust: While most CIOs and CFOs are eager to experiment with AI technologies, not all are fully on board when it comes to the investment required for complete adoption and implementation. There is still a fair amount of concern surrounding the lack of knowledge and the ability to assemble the right combination of AI technologies.
  • Currently, there is a great deal of excitement and hype around AI, which often translates to over-inflated expectations. AI is in its infancy; there is much to be learned and done. One of my concerns is that the reality of the long road ahead will cause many people to lose interest. Another concern is that many will view AI through the same lens as current analytical approaches to problem-solving, applying the same logic, tools, and infrastructure. AI requires thinking differently about the IT infrastructure. The scale of compute power and data storage required is unprecedented. Autonomous vehicles can easily generate hundreds of petabytes of data per year, all of which must be stored and analyzed to improve the underlying algorithms. Current practice is to copy data from shared storage to individual servers with SSDs for faster processing, returning the results to shared storage once processing is complete. Such practice is extremely costly and inefficient when shared storage systems like WekaIO’s Matrix can support AI workloads from a common data pool.
  • One concern is that AI can be over-hyped. AI is a great technology and solves many problems, but we’re a long way from AI curing cancer or relieving the world of war and famine. We can, however, have a positive impact in solving other real-world problems, and our hope is that people embrace this impact in a way that enables us to continue building for the future. In addition, with the wealth of new virtual assistants on the market today, we need to be cautious about consumer burnout and confusion. That’s why we created our cognitive arbitrator to connect these disparate virtual assistants and make them easier to interact with. Lastly, because of AI's dependency on data, questions about data privacy are becoming more relevant. It is important that we use data very responsibly and that we draw a clear line between using data to improve the AI for a task and abusing it for other purposes.

Ethics

  • I worry about people who use AI invasively, and people who use it as an argument to collect data. You also have to be wary of all these arguments about the next phase of evolution; I’m pretty sure that’s not going to happen the way people think it will. The idea of artificial entities that have different motives than we do, and that do things for their own reasons, is a very old one. We will have entities that are different, but it’s not going to be things like us in silicon cooperating with humans; machines and humans will wind up in different social structures, with humans doing some things and machines doing others. We can build a world with huge inequalities, or we can build a world where these capabilities make things better, and that is a choice we make by deciding what kinds of businesses and practices we want to support. We should all be more mindful of what’s going on in the world: there is opportunity and also a caution, which is not such a bad thing. Stop and think about how the world works.
  • There is a lot of talk about “AI for good” today, but simply wishing for the best as we develop new technology is not enough. We, as an entire society, need to rethink how the future will look so that we can all be prepared for changes to public safety and the workforce. We need to be sure that we are taking care of each member of society, and that AI can be used to benefit the whole spectrum, not just the top corporations. 
  • The biggest worry about AI is ethics. When AI has to make the tradeoff decisions that affect users, how are options weighed? How is this coded mathematically? These questions will shape the concerns over AI.
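One common way such tradeoffs get "coded mathematically" is as a weighted utility function over options, where the weights themselves encode the value judgments. A minimal sketch, using hypothetical option names, criteria, and weights (none of these come from the interviewees):

```python
# Minimal sketch of encoding a tradeoff as a weighted utility score.
# Option names, criteria, and weights below are hypothetical; the point
# is that choosing the weights IS the ethical decision being coded.

def utility(option, weights):
    """Weighted sum of an option's scores on each criterion."""
    return sum(weights[c] * option["scores"][c] for c in weights)

options = [
    {"name": "route_a", "scores": {"user_safety": 0.9, "travel_time": 0.4}},
    {"name": "route_b", "scores": {"user_safety": 0.6, "travel_time": 0.9}},
]

# Weighting safety 3x over speed is a value judgment, not a technical one.
weights = {"user_safety": 0.75, "travel_time": 0.25}

best = max(options, key=lambda o: utility(o, weights))
print(best["name"])  # route_a under these weights
```

Changing the weights flips which option "wins," which is exactly why the question of how options are weighed matters.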

Security

  • There is an arms race going on in security. Hackers are continually becoming more and more sophisticated and finding increasingly clever ways to bypass safeguards. AI is essential to solving that problem. Providing AI that can quickly learn to adapt to a changing adversary requires smarter systems. I think education is also really critical. It is important to remember that these systems are not flawless and therefore there will be mistakes in classification. It is also important that whoever is using a system includes human-performed verification or sampling to identify things that the AI system may need to improve upon. 
  • With GDPR rolling out, and given the black-box nature of DL, it is important to track and maintain audit trails for the decisions that were made. We are looking at audit trails and data lineage as they relate to AI.
  • Processing at the edge reduces privacy concerns: the system learns the value proposition locally, and there is no way to go back and identify who was queued up at the traffic light. Getting rid of the data at the edge quickly has powerful privacy implications.
  • Buzzword fatigue is a concern, though the area is starting to demonstrate value. Security is not an obstacle, but it is a concern. The more you do on the edge, the safer it is.
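The human-performed verification or sampling mentioned above, where a portion of an AI system's classifications is routed to people for review, can be sketched roughly like this (the records and the 10% sampling rate are hypothetical, not from the interviewees):

```python
import random

# Rough sketch of routing a random sample of classifier decisions to
# human reviewers, so misclassifications can be found and fed back
# into the system. Records and the 10% rate are hypothetical.

def sample_for_review(classifications, rate=0.10, seed=42):
    """Return the subset of classification records flagged for human review."""
    rng = random.Random(seed)  # fixed seed for a reproducible sample
    return [c for c in classifications if rng.random() < rate]

records = [
    {"id": i, "label": "malicious" if i % 7 == 0 else "benign"}
    for i in range(1000)
]

review_queue = sample_for_review(records)
print(f"{len(review_queue)} of {len(records)} records queued for human review")
```

In practice the sample might also be stratified, e.g. oversampling low-confidence predictions, since those are where the model most needs correction.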

Other

  • I expected adoption to be faster. We strive to be a catalyst in the process.
  • There’s a misconception that AI will take over jobs. AI will augment humans and make them more productive; humans will never be replaced, but will become more effective and less error-prone.
  • As with most new technologies, the early experiences people have with AI may not be very good, and that may make them hesitant about using it down the line. For example, one of the things we saw with the early deployment of chatbots was that they were good with simple tasks but couldn’t take on more complex requests, leaving users frustrated. Early experiences may slow down the positive and productive progress we need to make AI seamless. But I believe we will see the momentum around the usage of AI continue as the machines, and we, get better at implementation.
  • The successful implementation of an AI solution depends on the accuracy of the model and the completeness of the data. Some of the big concerns we see are the challenges of collecting the required data from different sources in near real-time and at big-data scale, understanding the data lineage and the relationships between sources, and keeping the algorithmic model up to date and relevant for business use cases. The reliability and adaptability of the model are key, and they take time and multiple iterations to mature.
  • Signal-to-noise ratio. Executives are going to overdose on snake oil, and there will be a backlash against AI as a whole. Those of us doing legitimate work will get painted with the same broad brush as the swindlers, and swindlers are certainly in the majority.

Here's who we spoke to:

  • Assaf Gad, Vice President and Strategic Partnerships, Audioburst
  • Tyler Foxworthy, Chief Scientist, DemandJump
  • Patric Palm, CEO, Favro
  • Sameer Padhye, CEO, FixStream
  • Matthew Tillman, CEO, Haven
  • Dipti Borkar, V.P. Product Marketing, Kinetica
  • Ted Dunning, Chief Application Architect, MapR
  • Jeff Aaron, VP Marketing and Ebrahim Safavi, Data Scientist, Mist Systems
  • Dominic Wellington, Global IT Evangelist, Moogsoft
  • Dr. Nils Lenke, Director, Corporate Research, Nuance Communications
  • Mark Gamble, Senior Director of Product Marketing, OpenText
  • Sri Ramanathan, Group Vice President of Mobile, Oracle
  • Sivan Metzger, CEO and Co-founder, ParallelM
  • Nisha Talagala, CTO and Co-founder, ParallelM
  • Stuart Feffer, Co-founder and CEO, Reality AI
  • Sven Denecken, SVP Head of Product Management, SAP S/4 Hana Cloud
  • Steve Sloan, Chief Product Officer, SendGrid
  • Simon Crosby, CTO, Swim
  • Liran Zvibel, CEO and Co-founder, WekaIO
  • Daniel DeMillard, A.I. Architect, zvelo

    Topics:
    artificial intelligence, machine learning, deep learning, natural language processing, robotic process automation

    Opinions expressed by DZone contributors are their own.
