How to Do ML Without Learning Data Science

Let's take a look at how to do machine learning without learning data science, and explore what technology democratization means for the field.

By Dana Crane · Oct. 11, 2018 · Opinion

For all that TensorFlow is the current darling of the machine learning (ML) crowd, at its core it’s a computation-graph library: it represents a series of commands and computations as a graph. Because each node in the graph is an operation and each edge carries a value, TensorFlow lends itself well to ML tasks.
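
To make that concrete, here’s a minimal sketch of the graph model using the TF 1.x-style API (still reachable in TensorFlow 2 via tf.compat.v1); the constants are illustrative, not anything the article specifies:

```python
# A minimal sketch of TensorFlow's computation-graph model.
# Each node is an operation; each edge carries a tensor value.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # build a graph instead of running eagerly

a = tf.constant(2.0)  # node: a constant-producing operation
b = tf.constant(3.0)  # node: another constant
c = tf.add(a, b)      # node: an add op; its incoming edges carry a and b

with tf.Session() as sess:
    print(sess.run(c))  # walking the graph computes 5.0
```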

Keras, on the other hand, is a front end for TensorFlow that lets you access all the raw power of TensorFlow while being far simpler to use. Want to create a neural network that does image classification? In Keras, that’s just a few lines of code: describe the layers you want, and Keras builds the classification engine for you.
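
As a hedged illustration (the layer sizes and MNIST-style input shape are my assumptions, not something the article specifies), a small Keras image classifier looks roughly like this:

```python
# A minimal sketch of an image classifier in Keras; sizes are illustrative.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),    # unroll each 28x28 image
    keras.layers.Dense(128, activation="relu"),    # one hidden layer
    keras.layers.Dense(10, activation="softmax"),  # one output per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # train on e.g. MNIST
```

Describe the layers, compile, fit: Keras handles the graph construction underneath.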

Technology Democratization

And here’s the rub: as technology becomes more accessible, it gets used by a wider range of people. But it also means that a lot more people are using technology without a good grasp of the principles and theory behind it. For example, almost everyone owns a computing device of some kind (a phone, laptop, or tablet) but couldn’t care less how it works. For the majority of us, computing devices are black boxes that we’re content to use as long as they continue to function. And when something does go wrong, we have computing experts to fall back on. However, ML is different.

ML offers us the quintessential black box: it takes in data and spits out conclusions, providing no insight into how that output was reached. On the one hand, ML models can deal with complexity and relationships in data that even experts can’t identify. But on the other hand, these ML black boxes present two major issues:

  • Recognizing problems: How do you know if the ML is cheating? In other words, how do you know whether you can trust the results? For example, an image classifier designed to recognize horses was found to be cheating by learning to recognize a copyright tag associated with horse pictures.
  • Diagnosing problems: If the results are incorrect, how do you diagnose the problem? In other words, if you don’t know what’s causing the error, how can you troubleshoot it? For example, complex models may have hundreds to thousands of inputs, and mapping that volume of inputs to outputs is extremely difficult, if not impossible, for humans.

Explainable AI

One of the proposed solutions to these problems is Explainable Artificial Intelligence (XAI), which seeks to create a version of ML that can explain in human terms how “black boxes” arrived at their decisions. The goal is to help users better understand whether to trust the results and provide insight into how the conclusion was reached.
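
To give a flavor of what XAI tooling can look like, here’s a hedged sketch using the open-source shap library, one popular post-hoc explanation technique; the model and dataset are illustrative assumptions, and nothing here is prescribed by the article:

```python
# A sketch of post-hoc explanation with SHAP (illustrative, not prescriptive).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100).fit(X, y)  # the "black box"

explainer = shap.TreeExplainer(model)        # explainer for tree-based models
shap_values = explainer.shap_values(X[:10])  # per-feature attributions

# Each attribution estimates how much a feature pushed a prediction up or
# down, giving a human something concrete to interrogate and audit.
```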

XAI is of particular interest to industries that are subject to regulation, such as housing, finance, and healthcare, where companies are required to explain how high-stakes decisions are made. Additionally, since it’s possible for data science teams to inadvertently build their biases into their ML models, fields like HR want assurance that those biases don’t come through when, for example, an ML model screens hiring candidates.

While there are some companies hoping to make XAI a reality, not everyone is convinced it’s possible in all cases. What do you think? Have your say by joining a discussion in the comments.

Tags: machine learning, data science

Published at DZone with permission of Dana Crane, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
